[Binary content not recoverable as text: POSIX tar archive of a Zuul CI job's output directory. Archive members (owner: core):
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)
The remainder of the file is the compressed payload of kubelet.log.gz and cannot be rendered as text.]
X?ѢV]{|Xp=] NԠԆ-cx:-/=ACʠ޶2oD5I=N/Zu-;yM5e:)Sv:4=,T6P^5/'o {MCUF* 3[(P {,[DkmH_e.ycrApIn]F?m2 ݯzMME $Ns~议!R+" ^+MBhڑ&[|=wDz=MGsι_J5@Z+NBLyB.S)^U˻* _ayw$enʻ 5~FoތE83$dDlhs!t:u+5uIΚo~~N?87Ss~L]>٤g{#̍=?<+lT;K&J}e_G9.%4M<%]vLK X'hdQ0yBBb OKW2io&n" _DzZGGwhFsE 53P34CS}+I94fcxr1ig"P:mԋb^ب6jv0Z.| z ]^Drm _7-mt;7Lf7؀<✼5th$n^| ,[:\<-7_AJ%r_> Ïee)wY|:<#y:߳$rZɟng|lQҚ:Dp2VkԩXʣw SWhuh6ۨ,drgrqP{ ggψn陻ttO{Gʟ@Һvkwvݡkwv[Lݡkwvݡkwvjvݡkwjekw花"""vݡkwHw$NBc,4{N^[ ʡ />HfSPNy}]d܂}nC< 5i[UF. ĐB%PLPd ON2*2Z#IB$ryB˘=:#se`GtQcWMz jݾq/LY.Bqr祯kk2#_lf:򝽼F(.n4hB.&{%,rJ`=Q$IfZ&EV WqC*q.|~rī]_`vgoz*'zmN̒.9J0Y#!Z42\WqBg3)C7W=}yۭ)YE#Mzj;{/5|}oqI+2oAff္YWՓAq0T1,lֈ֫SL}D}Im AdOֶW0=t{Fd^I˭,WMwngMniɍmWG쯱ŴFGGl(xc&$L|,oqA̜,4l8ߛr[vuu!z_z_3mhW񠺀EyQ?pW>F5&9:dI6"Lfۃbu )FXZY7s`r6KLc:֛ll{>=MG[QߝV)OSkRAecX|gO]d܂w[7&5jOSc@F&t[W q1D),J\9ͤq2U>iḁ̇8(P4 !Iĥd|Ԗh!b@"gÜ؅6=j&Gig$Б=W)t6$>m K4cƐؑKeJ0)bzm~+fhtOV \}0t QьiBDr2+! 4 s  4,b{c&oAx .X3z13K cYR@t(-2"ஊaa?iHԒ2d$IdW,Ҵ{G`".C$I%HW)n0v&LG&̐syb:hF:o#DF KDb%x{BW0=4IYRMGXxQ(%ˁXb$͌9f4V`gRn.퟿?l7j/0QioD,ᢣL Sig? f٣ }ըG"-4!] ׬;5GtL^Ѽ][ֆ Z!oYZs[\Q2poI+EM%nŷu.6\::*ay1@9O9(ڡDC0Z陦y?;z]QHĄ-(ZdNRsc\"v&ŧm5^asE?zL]i>61ӂոdOpMVh we2]6Ǽlx_ل Z-&vzc[3{+OWE& qC+j0cōboL*G[]]\d2Zw'hݛmvܹSdt~esp2hdJ;)RrP3AHɸa]4u`;2Ș cBkkAOkFZ=|ᵣ?޿4] `ns(GUpeht|׷] t|g~.\.UEЕgl%h FsT ZpV'Xp xܐ5%㌠1Ɉqu:鳐;d}YW)T.e%dnjkisV2\q_zY%H!g-gdVer4 |)cB̭uƴ [k)R/I ƫ//%*B>tz| C9OyOJ*AYBJ;+uC^~+ nzQzd 3|yGykO2k߽ʵ8V@ޑ1w-/ٳf>=#</M}"Kjag\ׯ]h4K)WBY3qkm#WEt%`Y g09oDzHrgq[WEImf)"*_(J_B5Q40Q63tjɏ/6 3حm -,%  {c!fQŘS$Ės %,N/aFfŞ;q`B̙[a6X+熌SreVe*X%5vRv( >Ps9N1D.驕=B[])}&#Zy׋~q Ů9[m>^}T8 ? @)LB"WZA]gtudUv9ub +yg/w*C0!Jt[X1c2b=6&oͭ+QGSYϡvt[ۑIYI㇧U yxUswne/Qw=ᖍzx ՜i[dv9ÍV߰a.(+/+Y0Pդ\xm .;MVìΜu7+{IEIU *EV3hY)JMܿ=SLyN^Y'!r}}V2voYOqQ{X#Ehp=K(ni:Vu:~SHIST $:=-- a\¤`ɐp%7iQh 5(p2'&&&4hmhdF c! 
BjCB(,J% (F\'Z}aqU Ak|/^i~a7\Hd~3W(YT=c)U`@0rԪ֧xf*2;ge:_(^0/SǍ o%yN5`w0\W4+C:Rb&y$ITbViǬQ ` | \K뭗Gg\6qZ^ZoHʁ}弩m]?dFzBôs>¦Mi Th|hG`ckV;xAsؔœ9؊Wt>(qƶ=qsב @rHoQHK`, Lw:!ܡX*T" PUsܝS~kD-ʚI"ֳQ=x#:4&s0ӫe4rkkjcHHx.92F``O `ڭc&"%Fr$ t]F_8퍩C"1 l 1e=t)P?H@M"'X"1ۘnGL<{`=?hLOW I6J=lN鈱2O)AHH(Ƹ^z+w*&N0f5К#զ@hc)K3|B;kNoM+j zufwIÞ:Rj f(H&p9&År[iT˗-C(c^C1T)b+48NGjd>Pz\Tknd_h2kS e,6'n`IƏC_6 0̠] C @>߆A.WAF4>M۪TnЮ<ύq_ɩ2֪yۓBVm.0F˓!IiuBº](  -,%  l҉Ӊ҉1^qkFEQ9EKl9w@ ,aqdaiBr ɸ[y(|"ARS?1 <2A=9#qZbϹp! kZ@o9 keJo[~i) mγGk8CAETa$XEEr;O8O^St6H ,ZSg{LY3%N$ A+dh+tBw)^HlGia9wlv|Mu y> ȥ )zZ@<9JJW FL _ˑ-W5q+_>(>-`Jgn7EU̓s2Rj^ק/G*UU'oGO| ߙ=NF> >>=VN -hyeQq5%N˧!&Xv޸:nh7/ qIE#PhAK9O Y42# HjFqϑ F* t=H҂[⨧<t2& s0(D"Kfn/fanS?Nip}p) |sEH; *s\IʬՖ(&A)c $p"h@QAsglrD`p'Mg;dzt{Wh)ʐ}\(o?UGA?\ϑDEf FqăV{{dM'k?YEH8{%CYFHDbl`sN Ap,H S D佖豉6* iȌ;sp6MjJ u*fB3It>k k[tr,C*Lft]?>k>ǭCqCȝUI$ 2NM? 3BE:#h;|DwlW5z^6;yn}H_sCkoi+/<˩w?)[(yPϛ-*Ֆ5n աtW<7oNRaY;j<f ㅋ|wUPs]Atׁ<7U"IPBkFKY4DFHԆi(.1CQ{0rHRHMLc1>"'^qfyrٞQ}oSGƋw^WT@Xc<2VIϩ bQtG. 1g*U_5#s%KR`#U`80ub#NK%c-'C#XFm>u ͂Ԋ#2\*밊ƥ "2\P>B6xјI&rX3T>/:9ĸQpkgQǼ@0cKT`Z(XhA;r6%KƿI.ak:ת=ԇ/$D}(Z4DAVD2ܒjJ7L\f.p\7*a"Voa`_4 .Ld~)y>BpbtjPXWS5AIZ+IJB+J79˳+K%R(iZc0ȰcgmI ZrV8Afx;m)mȾ^ /%tI{.o~)2'ٞa+ӗOS AEu lI6,cF1 )` dK\IJ"I,JL?{׶GdE.s`ɠ~n fqlM˒Z%moJKJ*RuѶ̬Hf 2. KI-U<#$Skc1X HZ%r lg< SN(j$]=s ]38ooܓU//6<^(h|.uQ|߶gVG'ljh{ `'e{B`}{Ki=nO y\}w4~4Ӓ>KY&upZs9DniiFCvssvm w丵^.o!t?|w3;p9f}d+)B⣳77h]ٷ篿b_0hQ׹XYUdZeGy:p8=vZMOv1-F֥cT#Cf)Ł,APJ)ef;*/ǭ3KAez>Zp>_>)~Wz6U1&#HVa9Ǥ^f1!bl,`ѐ;\m<O8{]T,dC>QL.6i^ILm: "%Mqq& = T+0F6Z!DjlsLuODg&`6WԳ)ޗJ I21R $\\IT-&+&T/0MHlWz ;c8ɛ*\p lS"QP1 /X&W&5ʗ'4I[,s|z|WWQĒ i:_dj ǜC*| JV֚ (cV/?oh6VnݗߴDeKzt%]<iM֢\3VDZ yJTHpK{7!Yz;s$;4[Q>\Tc1"O_|׽YȰ{i<pdVz뻎Fuz[ ;XLx'x2v j1<<2:G\0kսa~M~!9' SɎMRJ2Q$tq%ɤKaߓLLI&?` 잭b>^ . 
Q%mݿ!εJ &ɥ84dR&B)eTaZX[iG:Y5E\/X$B nM)N1TiL+5hZ]`t3PTtZB~~u~_jC] rՃ5f3ݑg54:ջj|խVʂZɼBf ijWp&נ֒ z^dX>:>Wk_yW}{1/^Aw :W䣤#G{wuzgy5IϮ[rzl:}'{ɝ!>h335dw@swrjkIJ"U͔Ce -ID.o8F㣿f57Hޟz>MBL`%sU:5٬DL@biٛ`ݴO-<FATr 2J pX⬱{j{yi *ȫ};zz ;uolv)DgfEϡR2}53`I5lZ5PHb.1+oBך躑?nV#/NW˘}:ANN<9~!=ب7Q,Lɢv}Ȟl,)6X)k396;فi}AkɉRLfATQuUI$?5o&M{A @JP#|,l@J%eoYK%1V,NH6!9v{PHx<&jѽ_mFpbX:z Wqwi;ca.) ֆ2W]`\uq`rTZ4{oN4Wǯ%\EutRܛ_I.PYYGƽq姣ťEGSs`~7vҧD.01]Ғ n&3#if3 qs NDۿJ<>oՖat4\+u+ yj4f­k;D}-R׷; *eTͯ{tb4\^M&(n5u~<̀i ~RN1J+,!Noy*@޼XjBS يђ%`L< ̇timwKIӏACp0JFc\uiwsեi[4W.umձ9sRuq?o?Yu 94p:R~i઼f!!]} ^y |}z/Wʥፇt]J{۽g_;{gis}*$J䞘 !K"V&r՘lw-sN`[67o7BjX<GMyTldAjN=6@³#j.^_h@ItrލQ&8`q 1iHsi]|SM -UjR9|F}r7~xh@tN$Fe&)'mK8\Msw sX<:I(Ums)c#j וS _"ĦG XDIL9%=.ˀEoE[Is[n|[:F=e5?tj~CrSp+ds)#.r)\ )cB5)Z) ʩ/S Z=gl7t`+^˭;t](V}mRjh6aT٥qjUYKT_z`'fݙiMjmC_z jÖ) oa *y"Jz:@qj/^</B}88;9z 4$R*$N.[|kC`Sq9LSokjZH\%:Ix<&KHړ(T!4cV ;u i3M`#0Q3U)+ 9'۲Īb;0ZU5iΪ?5  Y_{/=/R,6_Nc 5{x&rŕP⍅߫ ^%{ϷΎw,᳅ҳf:@Sˋjay[n>u},G|CheS! 3@6 Gqhj ?eJ`ɔ]DI8,W h9nlLb:K.M%l'B|L -iz)=:|Glv{`CB+cT+ڄlmmFcssР$rÔRM { .HA k.Bm@ =9uZJ3Jn8%mB ypy|AJm~šXzjDZs=1\w{t,n"woEo.~|Cy,}Z@:,Y0jR"gs>3[ 䮧) ^WO3[ѣkGv'c[,{s7M9-KI-x{ɦr3  = 7j٘ɀv2 x2 x \bC UG X2`K9XIm\C_F)_t|Q- Ol{lujlUͼ}*={X#U|LsGY.}3wKaϾSZO;)Rܫgе˟-ʟ=stqtw:WkD_9^D?]Fm9D4)$U2\&oyGev%rPkS8DRJ,>YIFz!)ƓXЍ-[ 4NJ"`{U+p1>i&-&Κy'ISͧ/R^x}~qJo{^^z}#t~s5K%t] ~ޞ @|(hLQ1gTRxf$*H!=zWZ FT$]R8[nU k]( y㋳˶*kZz>ŢjbZBXڹ]\ξ1kY~DBIJLP3T6P)AYΨN$*%p9;CRw==ZK"T( ^`R/ ]!Y?2M5=Džm~ ;KV8=\tExu~,wmHWz%xk3l12`@2H[S.#u[̿o0uRJ.Ur&E1q"NypHg!K3{>ۻy'Z.=ȽCMUo|!B.~i7 'SZ_Z7tnsb8ͶCjPf)wozΗ`Gr<篻:|̓q\)Gt]kS{ZWM-c<6/w 7ww7|vh] ϭͭJbCf=Pw@^jbFU,$pøb KB5kCl_}s5ԝMh6YQ1k$! I˔A$F.,q^tmȲCEf)'E9' xD2d-S8Ceڗɸjň;P0P!g7dL )G hI:W cKe~wT9lk]WwbP]r g\*@%`S<5McQ !4Y$рd8 U yvyu{xZMV=XM';W QWTLjJvJZJvтz2Jekd'`?3aOiY&gfM 5 dl$r!D]D,/5(}$.1Kgȁ]IB/"2خ<+9|k:rzAI3VDL)zP w$ه481:%JVDhlD|*uYgC_o"r*L[s9&]_?.1,GRd9mGXJ.Cٙl䮔S"0kf/=P;儳ogdhY08Ixuh7x4%J1$tMŖt}:y,m_?m yC3qu& 9?۱`1K[&UFpj}Z$?'EE8tLN0 +2GgLlwf%lܒNӁkEv%5&ѬXK^gCADPA ma@eHYr0LI:s 2qLGfA^5a^8t|8W}'J~u2Owr~lZo.~ s. 
]VqSq3Hܴ&espܡ1R(ZyA(  a"{>^!^ Lv9O3a=W-jf{jy&Ӭr8lwEdV`V`^[uEh_X/ dacWfCzI6R2OdOֶW0l߬f8ZnHZnk j]]+B+@^yKŽgDKspl2;6{ҕMǟm3 .Yۗ^>ɯ~+4c_slA[ڇ?J1izp/8?7Z';UFzHIsE;2$|ˈ3)gQ~4u¯F'ee$3]hcBJC&{l8*<,&y=-?CQc2AnXjeR_[*1ZtmN V/juwHDf:-/~hO}yQ&7ۧ:<"i}z7Dzy:ZxgzgPf̠TVjfP A5áf8̠TjBe A53f̠ T3jfP A̠T3jfP fUU3jfP A53f̠T3^mfCW/jBS<?4$WWi|A'bq2h0.FJN7cn05~GQo'D3KO)HO|0 YsăZ#1<ː _ !i)!f)"o6Ln?O:Vw$?~lэN}-fz. xojmVLBL˂lLLv~|7/6k K6p3Q=P,4C$b%DXha~YW)s(ֆ|RYE4ڱo 9'ү]1 83Zdce;dJ8T )m*luT,6fhd{y^B\!["~pEG݂GO[o Z6\87E}VW_EE\ PDzlȀtsV;`j\\j,wT>IPuq^tw^%fzm"JQ*+1ܧą`(ɏcH[ q1ukhBx֦o{e$ؐIOcm\^Nwt_nzZ謻\ÜAAp\=&eG)B*J<9>skI8Rr+ysd7d.4~Vٖ^ѸȻ,5F@H%scH~![ytpnˬ!Ї]A xl*MҢjLjc2k6yB,ؠsV!p%D _2?:U_4$QE  MxRJC+O`"p> V7 R?&xw;gLdsbE+zؖKz,Y.ɲ\ F 2F}Jar,I/&Qrr[ق~\TyMRJ*>q^J2>]mIS^@)/'džKvj;/G?&_~Q h:RArRNs& }JV8:,84*CbNcvI(LcLO@V"!SJhj> wD@D D͔r 7He%@ 7MR(# ^Vav/G{cgtLC|73sD]M.kPyh("$5 (JVS-pJr_|a\4xvnt=':uѯ;6,nXҰAK4g*Lȝ \ڂ,Kۍpq}{88.GWt]?f4mw^s~|qIχ76mjw{ba㪇N,|ToLwc4[vPKl6-MwN RR)b gs;3g;ФOԧ *ݜtm:LRt+S^W&s6^YKPV_GqIGr iAZNEm|;tOi6w[JVx}mddq++F۠'=UXp秠oǓ?KO/:R8tHV0^G(d4󁡹kD ~Y&ȶǣϊO19x(mFHE+pU0 XgI߼LWCZMOt:ҹzc^M_W5cr1fy021[/KsgR\Kǹ3sʛumi]wjdR劦[M?OÜ']йgӫf(3(^HQ&h6L؞lsH5|6># J<*i2@ Lx(VWcXxcMЬ  d}Pܦ@.@yxdRlȲ(^#;߱]/..oafUfWcNׯ~Mׯ~pW<;1\#׋2iGR f>+WJ4^+x%DjrD+x%DhWJ4^'5^+x%Dhx%DhWJ4^\÷L6oSŠV [v+nڭPjBV}01RWzTV+ۦA=u[Ɓw[ۃq/R.Oc tAV?9+6ge&З\~|;vOQ7\_Fk~qҏyLMJ.%mKrEfvY|3sEⵣm8ùo,y[oy]g8 Idɋ`@ ,Ŝ^2pgcf&BՍ~Y&ȶϊO19xHnJ"^Eo Py8[l*Ԯr'e^uσn'n[c[Wϗg\-Ap1wɫr\|S/ ~fO?pEƎU=FCRdYY׈1K.z]3w$I_A|Ժ+hΗN|>~ȢS Zn*)"2"~R1H*TM(JKb얌Qi3JBNBNzYxPY(mvϳ["F7 ?Ppux=]Brd\@ ifCNv,ѸT^3A%Ĉ6`d1t l ^%LTÒz4P@󨐙B$HudKD Ȅxr&ZPQY[Þ_<1)!RB(S<(`)3 k2Jx#gU|*c-zN9퐜vvX'kJ>/+77𘹫#*9Q8 80Czy=E=zQo77@L|H  "AJ(}`c:ӽCuM+'"1ATypk^u "!A@!H^{bin,3촰H]Ou42QǸָDȮLis`$SaJ1rvF.w>{a}ٗ*[ZN1aoskhL*-mG7i.M['6]C/{ΐ.2̊,N U!1438JIarQ1xΒj=+7Y hL[|?E:̏kIdϬ#?&3C;Exj`ˤF-Wiϗ3{tfYW'f)C=O<ˏS˖W{P;Y"Y*4lv āyAd!m[Benm>~Y/$֎{Iom!sLq^F}PԨѻF|٤8 Qh@֫h(sA~0C>N.R\/lOc䉛rۻI|%/yR_Zq|QS& G򷠚(/k>u#rk_}'x%x!ET~Yxo6Ccs>_xAeC#iRϵb[b;7&\~//y%n6jvǣ-5} r+i2pf č~D2OmĴ(GW;Wqu5ڹݶO0DœWFnxwwj{~6?/lzρkzKrk.op{}[7Va 
v-6?mNC^Icpe%)Ĩ5r:A 1gL.Fnc{t;^ *Ip+RrqJt u=R "}$V(ǺMRQGwʕ;919dt( eK|"Y[5]ևW-q6AP=zBX%(!lrщБfF3A5'ΦG&s⑩*J}wXŃMጹG:b~OWI_Wi"іW3[IJŸz[!( a> p TpNX1\MPASVoUq_1xi*gʋK἖/6(N}(H[UrH: !@X }T: 6[]}l3ëC]] 1PC ?9qLn#}FӋȼanQp9J_ X1D UW%TF&^`Sbi'5v\RX" UFc8Ɂ2eHQ҆荊B#dIGSe{߂(G-Aq[u*)y]} _/j%M縭P)! AZ*1¢֮75un."7^*+se]CUx9@SDOEKy pcB4Q˕Q|vA[ͭ K\ c^xd څhp@@LVHu mm"0,FΞg}u;ʅgvr}tdwb-ݛVqt]Ǭ@ݢxNh R,&4}֜ 88gts!h@Vր􆊄 Km`u4[mMr=)ZcKn't R8 q.Ζzq[6żxa][',$bI`JĒfr:XL-|,ir%ⵏ%}?\NI\e?qɕ\UV鮋Lj2k/ލØag$`lU&WsWZytTWP\IBQ^+`و+$W13.2GWQ\)3WHYX|1_Qlw>".2TJHUGun3xO&&]N )#gϔ3kʵܳ1? 7׷[KkW7>[~"hai) <1T <0 ;m/D-_e ]9+jML"I1*< `ӽ/ ]%:'le+uؕ tKISLfDȈ o45J "'8Ij=KVsݦ#Ӆ c~F\MxrulOuF'WWiU} =#s 6\\ɛʅ̕B?;G;翎FWǷ3\U684\&~3WWo_YdS a `y-*OL-r4;+W@yeŅP3zCTVXCvB5DI*v,i [/,S`~p28'hw"i EM#} H-w\ ΫBOic!_oڢwf+B=" >P*V8=I,t'ES&73B@S.֋YgL>=qj/狲 K^LJ%#PID5YD-Q+ݽ㉓ş^a+Pa+d+4  :ap8EAEP{@;Yd% ˬ=vw7xB ][ x1:{mGZbFjKlSecsNޙؾϛ6. Seep+O d*.0ԓh`lnt{>NǹuGWm r1C:HQ{:DȔ ^HȾژUЋqwN/ Ed0O5>W<`DXkxH)HkUtg+~@B##NIxdy^Ƭ}C-wo > jWjIO |ܻ[aoeKZ ;6tw-U>Sl&oTn~>Ũ4.<D׃Mn ~գ>߇_kh(y `I=X}U3_|;cpq1;-|6}nȗ;Ki iIM?*a&\0!brN3!F3$"bdddG A%_C46W+"MY$@s tC ݷB/Q[Z{̈́%jKe֮0j&ﺁlc=NJ<>jK`{-bQBqpZ%H%H/AzR;BL|eaξ62o<}|c|s"(|"+<"?}]l}bY8Kiˈ 4>8lbv*yZK킥PqpZQ φS^LѶd&Q$ދOsDN9:##%Aj@zKk!XX%@9y1Te[ý$ -tsBZtOG5&=ijۡ׻/5r f&~%dB͇ࣇGN c:!#ftB gAO|^F qnB̺Q|T%RA\ϝ瑪 e΅$9 a]fpD%kR6md*i)T&c@B]AEʡh^bjO|P98#ap!;-goRs5}j+lo ^Y@ݲ^2Ev A`T.c!|FjG4kRBv Iu@&/d/>Ȑ5=-Sb\AZ&+UI"t-g/Y/@ǣuYڳկ{\=)Zr_{G,ܳ(&RIa]S ĆPYS=Ņ_\ ,A$F$|䌓>RP1TR{a.%N49fSwڜC{*9S#Wd H]H+hlTcr%^ō+ϛx> ]}ܫGS[r/our/[ōm9+@O=n][k[/r&>mP5/ObՅ k#xHB9;'P&(T0dϟKٞ].eC59q,l!6:B&QtP"JXd.]O!CrJ߹QˇNEI$VbtR"*-yyB$mAo꼲_q BOPE@bTyNZ0 $b&ᔳ%UPX 'թQJĸ$ T sI_cep.utlS-1tmE]mpX.cg3P χg. 
@]=y_=w3 P`v2omKGt2ޛ^b`0&K( $ᛈ7E T /`={ nn9lw9iL^1N:zʬӥHɁҠY]'}ĽxWIw~gBC/?_WXbzs̶m'|3+<$Z6q*fa_ 5:F,%4.[?\ѹlX@0.{ƬyJo=F|2?Gf B /nWsޓar/ɨL$߯J\DNIJF"Zx^wek/|VZ, BGǧ2EIȘVI;ٹ((EOSDFx N-qWQb5obӓK-L6F)Ba#յel9[$"E)GIm|-ܗ-SM}ƯWL/nY::Ib7: nic!_7^׷(IX'f!"-E2Bɪ0ZL6f \DU5d  tv)ۤ`%Յx}kZiSӞA):l*,^vsJGJVBmM-&hX$?ń~Rz=}**?OoeajV~7E6>lL);e߇>濱LVUOD_W 6'<Ǚb!7Fȣ-6 gm2AviEv&[I\^ov+ov?A[o4Ck<`MT+rjv!Fe4wɧNvm֫;uuJBˊ 5DդLDF(g*}R/^k~O?ޘ Q!yWM!+!(4f!*}!,ɥX2փNƗ@I5&@0d7uQg+)O{r}y58kxy}5 zm}}hxH'9ۺwmz99Ϊ`qbÈA_%eLRZz.$Eqlw9S]UUOu}#fnڀ!?Q,.x'jJ+5p $q- `']{^vmK=N)K\< *&dbFmy0ms6 p?}MlY7+['ߚ1[7NW;ݴQ+R[0EEL0&wl;:߽̋&neºh [}A=7lue¥qǛa@ed:Ӏm<K~WM5je_!E:zif^R/UF&y{7oQ|%gtysyq۞v|=8,4wp=ܛijiڬ;1b[)&Jgj3YȻ3ASVcL{z|Q>#Qy%bKV7n tzw9x?KG0Z؉hفs%~s]CxvySI \~ͱ.Ȥ0X7i0 eݮ9FtMw4*}J#8|!$F;HGwmo{䇐sMr]K׹R|+-1=ҚP뚰n^l6%BL 51 $$q>'$)`b#B(qF JR~.*IqUry+N4B䮒\JJr.*IiH1tWb-+A㕌*K乸$-}0rpWo]IIU ů;*Wjw+x#y96ϞlnyJ5wwSszA!حO̳~ױr|{+N.l4ݽOLЕk,n bW`CDnCl-6Nj) a=6_Z-^\/Ccw =š ,yjݏ"!}s־WU"IPBkFKY4DFHԆ@i(.gZa䐤XHMLc1>"'^qUlWr$za:m_ݰWl]>/4OUsG] vY" h Q%6!`6IYc*5(K)µ0h8z268. åh2G"\B6xјI&rXwPۮJ(f_nXzec1ncc1/' r LKC)`c} ;:mkC[iofаx@pZf%_&.;qI+i?Lʾ/8H{O=Lz?go]D_i:omS['.>oFDkuҥ0efU>F?StT2c4`k U cJ#68R$#j ;^sXe[Ѷ{t&0+5+͑:[|SZͿ݄.ƕ I§~&Gshg>Ikb0ȥ߮%mS_Ǔ_ #㩞mqosKkQgu;Ѵ|q|?bΚƲ. KVmXF˸#$6 +˜R;Ѕy,E% &bRc6(SX8 )&:K;,ue rpX I, DJ"FJ" 1A4`"2pnQtVژ Ήٽtj!-M푓O]koɄNfn(~~,5?lXj'Hzru82hɤ\nZ~WHZ]zs 5V3J5x΅S Kzj7[]hޔAf_m#gp< =E]ld[^RUMWtыyz̘8r=1 &\hH2+YJY(ۺ۶LRG|Nhꮛ+`1:J!#XdHkP5Gl&ng=YWc^y}>&^R$j~O>FJm W-LyN&ĜUb^4s !T(8>SjTш.:*v^1JoKDc雞r>)\D/1$L ĎYP6*.E@0Ż<_e`ږhZDȱޠ)Բ"مL ᤴ2Ac@o?ufv\ RkEɽF3REPb|FuQ|R^HvCɰɘ]_#ZuZkM2R-LTfV~?GK@RHM@+D%DmAu/(\fř++4N+rZ YP:IZ'Y4 ^%cBV;gg \}`PmoNIa1iF} zT@*h ON琴hDa'JSea")#O2( HQ8~'`٪wW &RH| "zȲxAM2dOR>KsN3Foc ^D'OOع]A?z 4Iw3vDEm-zs]LP CFxnyRKVbD[:w5'z65SCgoݧd dYZu4@a+I>sS/˿1=b{j6SkXGJT=[]ViI-]*0f^/ ]|6 Be爊ʥiGj7&"]Ӹڤ]:a lf~ujD.v`o rH,܊cs3HvrNA}3"Rr3u%x'3U[Gi,Vcln<_<\(APJ$er:I I8(RF(>9ʲ(3 MPJ1c`X"B0A`غm+q8F9^7%4_u/ 0x"\`:tsEVN\c|9[vO{_PxUN0 9I("z(Z.FaXt C(!moRQ HM|NT z;TdD1d! 
{Qe˶&eC%EđmC*5l}f춇 ~]ơXfqE--h[+G x8dQjB 0ƋUbd'WU@b׾GHl89տ̬yZePTQZsQ@!e_TNd-e2PEfZϣ3ۛn+̯>^cJn]ڋKO0l͈A,ebJhs !I.g#5CIId4,G7u ~i<,~أy]OŚ -bm*%wly駞~1|#0G`0x#B{p5Aq14b;]0We.zҒښ6_u dն^]xY=wVnHVh*?R{x*.m @ -3c-pdVVջΗMg;y=Z/> "2)2=jzɹ_5*eTZqnr-;mb]hNFu(Zz -}o_4nP,j686N썾\+JTPZ@M)cN$m D1Mfx^Ybao~HZơU\7x imFQ<e}jJNC:_[&7?7'NQ P?ȩPSEPco6b 0 DeObɶsTOASN P&.:_ A 1&jS.΂MQflBTY9")䳻f6>e]yz6W糳A^XAÅosTﱔjM.d~7iLs_dm9o=ꑅvYw+1- јz+ռjجf8 (Vcdb%S,DMHC6# 9jgѢ'k_T\8g_zWpɔk/K, *gm5Ht:WLbu"BGD,XJfކzhT]ӉGlg'J[iǼy|+Q͖EP;*0ɴê::vX,nc;7 5ٓW~vxKB҂RJ4Wx2WU`“1WU\gO\T!sUuT̕1(n#z=ngwJ2]r:?ecnO|‹j/Nvτ~{"gח奉Gs“s ~vp}6鴰gq\=tҌ{voLic7.Ŵ4/H&Ty, 9< r{c-gheU_-Ǭ#KDќBgs(J[$8"bn~}5)Ox_{.k# ?V~_5FgWW*:g ?(ۿKvea :H-CDlg=3fyĀu 8ǝ>{;.Y 1k*:g5vHzD_| eOiF<(*ShXZ-W4fhޠG㬂:֫{2檊kOXn`Gs>ND]8__)g_^7w:Z'.#ˁu?O ?%/Ve([e\Jrmf!w1%kb,ا$6Β3s$]K\BJ-td]`gJ#q6f%. Կs9\2թT eb6>`DHbSHrw#H!bp!9BU98lc 60ڂJ2,4m:Xt*d"]LI)kr $yGٗW=[G𦉯gӿ9=.w31;2A&?0{a!Ej tٕ䬵&:{q R'IZ0l`1 *j5r.TmтU։L!aII!QUG˭eklTݥ3 A,~/V2OG)fa.q3f7Wco=7f}}ckǞo S="s 2G`<0d#BWAq`Pm11ǁiKe4n2E!7m_`dhckz~ b\/nNi{>tp;@^#t~ڔ49+_& Nl dƫ˰Ɇk>;wۋ[3lR;W]BVjc,hZ/h*sxygei@F)׶8`.sgbS,PScٲ";|jEN߂rîM#d9:ŝN#rEa !i!Xc3.ͼW>;P,Hy3y+_?4EH+.߾|״f$5R.rXKt@U#I:Ů P,|!]~HO}/{dB5A@ PAEwspI@h%#T<0TO!k2…;Tܫ&񫲸Uw[v(:G#XŶR@kHhɥ@ B*uTΖB׮9mul95XA dTҬ#"j1ekJnl^5: Ϯ%+oi O^H^w~~%/8֖1S;R#.tE}1_)X 8Tx@+8fiΞ9Ol_uةu PS08C|@9QGd*rf6@Bj0Yd}]J]`^Yw{Z?,K rN5@Ѧc֬r;twf%;9=DLnf7+Wܿ$fHSMlpf7\zsitl#ך6k 7Ԉf/8S̐?6^^\f;Jkmi>ul/>=0r?^_~__^l g*_g,s=-ûyC=$Za[m*pvdN&eвm͏k¾~ BAaܐjݥEQpAiTXԑ + }w KFK KգMse R%)j\d$$kcr]jZ%lh{m QUͥ *rVŪKCŠq9Hˁ֙x9`9ۛcQ0Ij<`H\ȍzG% [FZձVucfͫ]:1+N]JGJu$Jds/zJ_1/|lPHr=8@WfRRP5`5``5`RU6g\:[PT-~Lʖisj}kFqlŪg9't ˚q\58WW+[ ك_ 6Ǒp=z)K?w G:p*/䨐V&ȿJty$G#9:.9:@&*`UEY-YJӧ\jm-H!*BP{ ]m%)siu1kmT&'K{Мfbr}{C/W۬OXJUjDh39)Y>};gof4螺8p?d9ybe9K+AtT f[Q)l7RCTʞJaQQWih'.[OW]*XUHJILL]>\O^'e&mER^I 9F䬍`j0S.)Pr`S;շlzx4C m7nFdh!ЉQmڄ_\z?'cF|بhWYB|'5u|?="M)D]@&灸HJِ|\F'Fy$cҟ~VRMfBH6 tmg[s%xjlͶ6{͂j5r.Tmтbm7ڢ1+U;nlYx˔?߁\x#on/7,jв'tW}<_rHv+9ʏ/uIb( oJ'#ZbUoSr,0G6Ad2 dU-(1bD)V.+&%UMzK2g3w&YW?|f5f 3yŘY)U}s JaCjH06 dӱ$Op$Bw*xrPFJ 5Z- 
PM!+ds!@Lv|X"B)M?y @.WMD.Qȫ1`5ȟԛ"v3g3Els).4_.jY˰ ) 5٨>/ԂT2%U#t"xmqU9$d$ Հ+#r!w?8;@ƺI+J En._s 76Xa׭!ͤAǢЕ,A$"V[jeLtuڔ#^\4Q > _L}۠gO>.i|/L[_KOb}Ijd39BC(w`eR.A[Ή(Ts Uۚ5Uh):VI}/KU[&$Nɛ:ARh9ρ@.J4J7 Z@mFb wmmHpi\ڇCR[r{Ih@V&$%5]"2cIp3>8 t5 YP0#s:pMF]OVɋc@vc,0l]Vw^{1 fe^}IV'{{{+M۾=%-s1=tr 4??N hҬBd<G&E} ܲ7ʡv0Y P/r2jk1K.zR*>Is>HFjٍϳD{bduPMʳ-ږ~ew$vW XOeΏ;GlBN3!i'o*$X VBIdbk*a]AdBP?") lJmphY81e-\;;L'xϹ 5:Em^yC`U7KD &r02^ qQ7/7+m| Vq !LW#as&:_;Zwvaԯ x` "V"!bCF1Z1rȖŤL3Vg'eY[{ rƴ2@rG%L-$MFid^p%2"V @>]R6,iX2h3&h|>H"oF[:ߨ^*j알ƀ*ce_>~4 J9ꎸB6 Tr&x(8 <8nxn@nEgT2MI^*1F A*GCN走,e/r[< <\EMNg$W K %i^ P\ہ՝|rC*R՚{G@N?^1-daA\|;mC.|ag ]5i|D|Uh!up*\ Ep*\4VYnpUh.ZVUh.ZVIEp*\ EVUh.ZVUh.ZKMsZ:*\ Ep*\ H d|ˊkM2I7&$d|oM2IB5&$d|&$d|o6&$d|C&$d|Ò&$d|ooa&߂d|oM2I7&$W.dr>+ 3,_ ݔޝ]~pR%4y{*.ӧuV~7%D^뺫Z}ޮ_Kw7 {MLdKYAw6|zfvf~Z8:ЇӁ2C=ҹ,t!qvD|ɛ:uҼc Phy73 j}v\7,g| F|yq=F`oDm [lhNg^ͦ7˳ъ?gMo&ݽǡB?|_PըzaCD;P^{} F d|f>m^ִp6LA5q se@ύ"+ƣ p襖I2Hr A m )Kn&%'A//uP&niU^S֝f͂M'!-00eزj0xX{~99hJ v 2M/.~ (D 4]v:,(Pd,T2dl2`0X&iUK`lFh#p͌dL0LGϤHY( % 6Jue,uD2C-XCcPdRh]&p QTjy9 F͖":ەOz3'&?@~zQ.Et`x2ڷv˸x$v Oq~bF! Α}3er,h(8lY}5[XoFXR-6RQ82S_Ej{vaway_J|^5zӼ-swٔ2Yfhm=qC$3D"8r@S0$6pԩx-G=:2±v[N|>r1?3/fNÁ =ӣՄyS|59ܓfejOUwi?NDVvQ|X"abIޅ37Z K|ةKё҉}ͭ|MEaȷ[ѐXOzg 3BErHLVI"C\85X'·ؐ5+-UؠgdtI^{UʅZ a]E[s&a`B2Hu*WF`ݠʚWʝW,c+cՐ3O󙧏y~zS`Zr)k(֓f[f$6ςlЁb7'C>es2ۉag;1=l'TR`1%o/b*Aԉo.c [Q*J_"BWh]*1e:D,1>}J\ 8 pA)UҘr&amjyaM䗳W{')|z=iˢn5Qh_W-5M:K( wdpQ LUTɕ&ZvDJYn~oӁ.ӣib:09@4XGg ,"23kл]AtkPEh%M2k27 0x6l/39 6(rrQ笰ĂXmfo'cMM/AQE MI) e^q@=x2&Ηi#Udv`T3҇?vijӀ 291"11M|p.ŢǯgD$+A@itlc˂kQ#2DcTf< Αk4"MѺ̹)J)Ej-BU6EoPQĞQZkFW}ZyܺΟΉwT/k? ^ 8_Wx+N)K^eI)Cy]8(MzC%o髕۞H_sh4::] x^Zyd:q<ː +} :`QbF7D| m#28)ٚ{q!`z=Ƈ4OdveL3<=5鷏'm~m=8~la4< Y~}.މg*k_m!WOLKmcZ'bZ{8^J9Z6 kOԇ  X k%Q,X.%LtdaO(־^^?U3~-dEqS{" wexƜtSl11rǔL.CIR(OHwSd\3d `1 #6Lպ{1 Ib݉샯rE[(\/y'ZO=yzlDG0Aw3[W +vQu%qøb =' ++V%&GѨA䍄$$-S3YA8/źJ~eDYdF8'#G,cJ2#,"r=|5֝ݔn5_ہ-<! 
1( A%mcf9&NZ_&mCLAd)BN>gDD2O48L%UA ɩ̯z>阘WƆm>Q?0%f:VXN$YQ'}pB1{] 5GbDXtWCv DP3%F_%r6*llH66K{4w [kW( 5{z1-h9p& bV'D-fW\cQ !tY$сd9 U>|ff~숲o,lT(Չ( }z4.54Ęr缵S]HwA\4..ggg&|42V2UfrHZikv%2qL,e6Ir &ԍ3;9qw^Pqπ̪<VT,K#GgrLuefdt?;:;"PްU}9/dEG/nJgW~X? =[&oё_י^%{4ݾÊ4?'\_뺫BɫziciAvL3I=+w!>vwoד7Knuӊс~DSZ ϸp,:>c.p:K޼tn]Vhw\oF_WFˋYy?oP볣4fq=̗n0oe(써}MəG4Goz5.F+b?3sGٗ \ 2M\ RN#+9͗ki}9$KLr$3T,ҙY<3Ɓ,/T LMX+`ObCmPsӱ]EsRDt)f}2Sf'Q TJG L$䱌`Z5:ƻr6%ڷz~cK MW01XE'm[9rJ8D捋$Dlf;ox jβK)%\@bԸWZnIGHj|1gڶqLż502kƲQ˱?`.-Y}LIveEU]BUa=$M*7 =\S_ cT%TE!BhDCBhg$S_R|mkxbr\ j~>fF8b~"S /zy%tq>}|1rB'ۂ@DEtnmd70io$.V [h|$9KL8| >W?V+>3݂m.UЃbzLpBًD6 ?ƫB<]^]|Hu3˫?o76zXuV-@&U%B J ZNR7Ug9y(ːt4{1J K mh-,ʂ)".7!s'uƆFKЌ2g?~?n;ZS,^137bl|fL[1_<9zbMSs&hP&5'ɩ`94'.>qa빕 =E@9fkJM"\f$F)桸3IWƊAE-/I뚩SUKJLs9{2UlRWߖ{7HϜNR?vKW/m(Fyp| [?}ŷŵ|;xfqYxK_\^/Om#9[M[\GYMԁK~̋ 1}ezg/￟=~y};g;9;-۟zgN7un{]IǕ>:|ۣuy1Lc?E wyT2;y{浡fg9 ~iw_E%U_cn37#n6'<}j0t4<>nmpϳ[ހuD~uؗW[HX W+ZqXԋ/'|]Z=/<觝5Nӊ>ڈxm77±noW+926i ̮'n|9yȾnt#yz@^b*]0HĄ߃]_ћN|bCo?1xL'jw4a{<'Z4Jset6٬9>ZSsj}Nϩ9>ZSsj}Nrj=Ӕltmѭ ;=p Il26S l/lMdudI:IIP\I` mh* a< !yJۋfGWNV@a+b})VVO :jSCs{Cj/|oۗ6e1h^|`l4/֢hXIu)Ss&NL \#7~Wy=_jp&LwtzNHg|=!XWgYm秋wn9/guЋ]_O|k=Ϗ_4מ̖oC^9y~ӫ_Z -9 9R$!+v-K٩}y=TB(=I ll"@`g9gn?.d|MijDAG'ўے[L}}mk~Vx&:W{137b. 
nnK1rB'ۂ@DEtH?~L3|'R3Lފ4E5$S&ɡZ2l%| >W?V+>3݂m.UЃb0:'\Mds9{nl|ASgwWm/ovw-jSOau׉>_X/9s!%J-D'MO3]|@QTMj"bZBc86(icL#wՁ,6C>keI Z`ԒrE!l9{ү l9P/ozP 1d5W+,%"〼Y ps:.|]` y@ߚ7z8_J !C<(5뿂T.f}Dxk+ZM3lI3OzE?BHk1Mˑ|KɊkŚ̩">+nCB?_`بfޫ|)Y+I}\jm*ɿw#VDZԵE*A-!Wt) yqVL\p8x3{۸L}kš8߰!\K6uB>Q\Zӷ0%VxMwP^iC&S"=ƹ{1l<;`ȿ"EX6S)wnszK On b7*6 i[\ɭdRoѴOpb&V_OAMNY>7@H9w^Qo7ُOc[mcAƋ||߭>?iudr'b',IܼV\ݬ t"=YkXOvPأa7{]<[}eQK3y;䞤w2e/b]3bjVV=a(mi֧Pb5y4x6HN߰&J&p(}{B=d(7>d kh ў%`7Fu;{{oUȱe~>gZR2GPeiW:UWip)3)np2RSWߖ=6$5VhmuL^/e77m^c:cX 5 Ɨr-"%[wùlZp6k `A-9ӗcεy4g!ՆU[JR @M8̜ݑq;JyQ#㙱ЛЛ9h,Ztup;5Iꄉ..mrTZFIW$1hr5,h-H01tGi(λ6}LDaMj7TbciS0:ʜ\5ry0vxnԶ`Jޑ N6[Q)^%bQJ x!P3Z7v[ 1,(CVdhAY4{Ik}42G*wJ%a쎇KRCs#爸mVAM#sje!aʡVn)0`hDl $f}MZ3 Q2yZr<% ܫGaV\:ZW/?]?;( lt;1 Dxheh_?M_fAXg(^yoKy~IBCYo|fv36TYgOO>ɴLrPyJJb,{\u!rk $_fKS5_1J÷a3z)Dn@2`Dmu-5iCcU ~Qzu1- ˫u{V/t u7!BH& $ćlEYɨ*րEX!pE02`=, `Z)+miB(a>ˈB\-PYM:A6.!WAᬜZY#i UAUc`h֝ؐalS㫵JfL>~8}ݖ^;Tm#ڭ㥦7[gzӭ]x%)!\@`xE!@K(P2sE5e4xfi3"kC6ktY l-1lD:AGf%9 ̈}c+6Hp Q'"6JK%x[36{ȴroql8´WaꮧWoj,X? rӑhGV FwfŞ{ms7] fjUwij2@V{O͸M:LyWwޕe€J&iǃքcdpN_#<_Fnm67gdC2B!CpD6 H$W1kb2$UŰ5V7sF{ (~34 tY;hҳ^XWdM oe.Kc`N LYE#5*r(iƱWV: yjQnw=CF132 e6 eBkϵֈ=y2\Xi Q~*ՇdΦCH:$ VX`N'+&NđQ-NV;j[?Y) 21ԭRZBT,Iatli JkQ̉fBi  `Q XY'tiݬ;АR@y:v*Vd<~.2ӟ< 9yT' sD!Cw}j×{7ݏ9}.XY%:9[SԔ" *g-R$P #e[?/"?T> .dsPBfR5x6<'-/NbH%ZQK6zO%lQE3덫޳G&sfW~ޛ2٧)?uqvi(ظ\8.N&_'W"z+ 4++=! - jcɪT6wvM[< ä.HiUaS[-TH{EAsG'i6M`l+8ecԖ#j v:2߈Nz]20Qn0b`Տ 5a3j#y#-df؊:;ƚMV>Z&QR*& S>ͺ{~W`ZDZ#qD]o}Lj\lE*+TyPXٖ55 u(aM1x)uJ hJRqEٲR:}-*p(j֝xP*א:IɱqQ8!xxG'P=z=(V75:-d푔IW/"fQx$0YT:֔#*/dcc Iz|d!ʚlHR @ LNm0Jduim!{\RG8^r6^v˻LZׅVjWo!mxrfw 2,wČ|y嫫zVISsS|S;r~2e}m'ϸuᡸ'2zZ ?X?\=)iur|6vz^!+_^=BsSΈ&֥g*|;|?{fvvϿ[8H{ki]⸽rrouH_׽Vhޞ~ ly+?'j}lo߹OK}8O?-]/33n{?uFћX>ͮ]ng+:~ꥱ}Np䂋Vt(ttP3hdȑXnXnpOX"rleEZ=PK-rXLMSXG9[Z@S"(Tˑbi 7Oi֝bOC/64IEoqH/|K(>*?4dJQ/(VK1Z,f}9. 
L5vj5k֫Q$ MQ ,A M(X,&AI"2 $Q\,{5(!YPCPTP>;fPxW켵YgKy塔" 5> $ۓ+F$oXPZ]P}t:ִXRӨ-Uf@/3dT+ I/΀*L$?98"#*RHWFՌjūh'vҽC쁚 +C2$E6%$26KaC.)ʬ>Icd.LA8UϤLz&)&>hG8R.3QE0UCL\D lib@0-Fmk#}NڈV"rw[뀍 ײA:܋|k< xGhdlP Iȱ8ME]X#NGpoҭZ^AHTCLqvp*`$krԏV<7P"w{> s2-|I.ɺR`EP"gT02yS'ԆdrB~Ic>Vz#()t֫b1 ˀl:&́C[/&Nւ` 䢥ٴ -RwPg铰Qx"Bf9ǜ0voEiHGycrA[29,kѭffz9_!jy h̃ۘ1v0v짅a)i[Mʤ`Fuh(Rv1Yʊ"3DŽxf9p.+~?8P(vuOtx"x: #3<$Vm a&3L Y=BOeWzh\Gɘ`ȣgR`6RC-T3u4iy#5)?WHj VWL̄l"t b" saӰd=oޓr|3QA7p"u}f*K&^tTߌƤ͌cLNap΅~wN`p!__>_v37[JNӷM廄zPzczwy0#T@_d%!(q|2^pd5ۅy5{8&!S%GګRme62RI71#)yΓ2>,zqzEc>mL7xdžBW WsGLNTzsicZ~:C89aА^U%9G# 4Y# x rSSwrۊ4\P$ϧ,A>I,%<ڥ{0I/m^M}z+9,S 9V0!fĶ3T* &m`l|7UjT}+rJIhitm\igϞ>;[8`nGp譡[/ͧ?CZNBc&p^6A h΄S v 8OxᘈY9L2ds%QbɹB/8pT^>'G&GU9p-O1D|U-wPg#}U%Ҏ11HN&%R:Z`EO\=o^R@=M4,aY|t^pn[Q7;o oNR҈.ʁ0Cn<654Ng Ơ\ڷ|[}-߿"+GEV"x1%(f8 $.Ȗ!j0:ZKI:y文~mk Z,+v'M9p}nu%}>=}Wԫ3HP>sGS TgtAMѰpৈ.uó sn|@ZbIT#j]?4$Ḡ&bXVKu:2琥V LFDIk60P$Ȓېg@­mb:2}fgٛ8֊x D*Vڇ]0xhnK뫳阍:n)FQRA $k%c.:2U-cTHa6K$c@&oWw9=JEKCQ،C`(jɨrdg M zv<gk_5z\=.)ڈrE\Q#EI?] xG#&bkflS d oShMp\zpX8!5OV[B" KnjY()B*^qAZ0Hː'Rr&s9*6Aqgol9N}K8^l#Kx2-bw_N{AMfomI2ʭ7ӦmGصt!= @5Vo=j5}pJ6ݽ#JѦ-tͥ/@h}kvĠc.ouYk21f潖kv񶫦ޝoќ|v煑7t}#N7y46߹)~u͍N/< Jj6tS6L\uޡ "(6qrϦ=; d/@51'U,^60K!ss3D/YLVhъY[N$A RA8/(R1md%Ef)ݝO' xLh-SfV S mK,`bVXN`Y>8sJhW׾Řw$@ $ #$͔9e0LyKڨx}1R[.lUomDښs>TwRlGifߝ7h!jƗ}Md(ECg4Z:^Մ)ɥP-77;;4|~u6['͸a@ƴ0-x_rprZFeK| 7|qt}Zpp1*? >ppE–`p4ɭ]痽ma8<\_dCk;tD>{Eظa~f#lo.s6Wp6}Qk([[-+ v$gN3 9LWJq(}m=3P>yֶOuW$Sp0 80]|*RQ+tWCD[]䡸+-3jI)dݕn8*i:G ]i޻"euW]!«wEpHG&F컻7]Jw%\Sҿn utN?,- '(k}0F_GJ~&S sv^hKs8OF8hM/5S]i 9riOÖ`Nx=s8=YV0-ƼL>_^u#3]N:x{+TF9"sMP5Y("`>c[$)/ɉu}j2h0.FJN7Χܔ>>X˓T s5])$ujv5viLy7S|Za;H9+x*R Y#WH]}wWPUV}wWEJ]qWzW2ΝAWHss@?~ǏET<1oq`fF w7 'gmshPIa  YfU|B)`\HAsNڳŌVt@˾ke |OS-T7&ϱ239l:ysz o+Oc',;a/~!Y@nfY@nf9|MhLnώ0dR\'eLmʄ80qE8BPXnRR_/zU6ȫl| iG.%K5lSkjd| o 'nCFqy8!Ց!m9=}}LxVkkj֫K No] R[IҝgHk'Ǚ;BƎ'?/Qmt: MTrTOMCLޗY[J"BL5]0ICj' ;l$cc4-WR f-76 w1f Px,L g;9RLDzMMBllIfl9}Fq9FXCQxUdZe%-\ ! qXv);yW34Y GO&[f0x}N|c,8%F(4oKK[#pWD1HLL? gǃL2]\yH3|}tqyrğ yO=R&~ͽȷdv ! 
5@]mDoLH9CHj8cō %-(ǚ9ꁰ W adSr.HNh QOOǗWm9kֿqC&XF9|t{,hC/9zM©&hO&`BĹ9i޸8 K]ύ ,h%̶`Dkʡ&okic lPޙjWƊA\ka2_ PkXjJ =1y9`gز8<ܸFzd3ZJ\L=;_[Ǝ۷&:;tmUgg'ȫivK׫Yx/}S[HWvơ-k]Mԉ{_q9zﭧ֛y]͑ o~-GϿo|%]CZnEkoU(]m?G;ӼۦyI[u-cw_tGiK~P^6tQhc:ta>JYX/=$~OeP)'f'++f uK$x' --\dM*:@M@16`em@IW |/SUzb؋I9KB6q[?2zypu =$-CQ$2F Ƥq' syu|5$"Cf0F-R1 _dٴ;132|rlr})e5lk5a\ٰgK liǛ U/Jm&P؊`)!y9#b.<.0yz~g\#GRהH՜L 9XWf5j'q+:9;{Jͅ iE/cjM"8>œhHWkfz╰V[)=7)+X⯑Z>΀wZi ^6*Q%\ΖQz: y -Ifq+":uS|LΗ;m. D{iMnN6a֛:g5Z,b@21=]!x0 yw`qOBZɴlr4Hۄ97r<`z+'ܿhIə00@y:._̔*` ;U m1yyjJ Xq8oe-h7>Bi;JUB*>HbD/m >ݚ2 B66e7|+siW[kq.fgQ~?=u0C)zlwg>Tϙ` Y p1Ŵ2@ 35N6W;m4aM2b5vivr?<ͻ xp/WzG~&Ǒ&25Oy|ь 3w|8%Ht8'n_{nYP,@ .(5;cZe=7[KhX c o49-jsRɉ.lP8%2aq|,^>V8mavm 7xpVƉ<_ * 3\AbEhZp6\b.D06!bQ4f#bi:*(Ri]Z5TKѓ)HFaَY$8b7cЛ 4u57wY\J`OOWMX(DvS5\ ~BfV]Sj[H1S`]MD%\A&I8,ШlGl>?+˃a㥨mQ{fT IO@BUa^kMZLF]_ yWuc c2C EIT:Q1J? .rӠ&8l8pօx(8>^n0"gDd*IM8EBC#5DBB&ڪlP(-\( 11:] mVg$p )GKdAz>`D6YqqqzHkì䥸qϸ8&7RSk a-4V G2E[8>pPpu/M+HW6DP=P2d=OSP)ɕ^ $&(ʙ E^:hr7 Vb,;iHqph=VEmXXNy O$ p3r|xGLQX,,IP3 !j@oJ4Bq’'l2{0>;g}H-cqdh۠EFGv'}ڰ?g r_݁W.zWsK>1&a.OMmҤBKYEJrd[ba|?#}H,8EU:qߜ*7+Xmx0 5/{jHaD ^kŖ>ߝ՞f{A5hjyo9xי)9Ǟ1~n[±e϶df~R~zrSi;YѼȷ]KSY[8DDHyII.A $qj Y$~姿]_^x5'9`'U@JRFIwݟSB`{:Np%ם:m++:鷣ʜR ֯_[J.LR;|'RFuՕ3󧤮!O]Ur_!H]UjiN0S)WN]]і[O?z+mqu?_0g_/ۿ:nOof~?<X޿K^ZIef]vK x+e'!Eb(:H*G$m9o) x8a%ZH"1svX?;]o߸c{[^y~6Ol%5wNa[Ԅ(Y6^VXd~\WryњD^zR~=QݚvSy6o6 33Nt'|l[eyK>YD됼$cUAa䓋|A/Gk EYQӒ*d h[@.Zڐ/:ӠLv ̀S(E%ٲucos4N|珗7 ټToJ-+IS]qdTI mQPF8F`Q1h`l4Pa6E٨ e kթ8: JfCAʋ%DtP/(F(PFHQQN*_uU#q~dqRo /bdfc4Ћd+Y0P ŦQI Fºi\f9S /PU=ͮ|'W&0j_-!8Lg:_qtgf7SZ4Ik/˵9:i@\M$z\Ѽ|t9ܓ. 
Q{,.<0I55\NzhqfKp3&m';-Lg~h 8?8ŋy-\#kė-Mi8^4mĉ'[L Rѯ>PA AĬ!OF'Άlt @ϰ'α˚x1.ĺHQ$@):0LH?ѢQ5Qzc5yo8yn5%9HՆpjsTۘ?[?œ^\G/$/#W˨=tQipX} wzb Ӽc\z*!l5Z CgNbq0+΄FEKp=x0y'?Z&dEBJ^1v5G9c6D%{6ۛLn~cDb*%c+?csOp'[s[3}ܤX&2ba-P-,CEZl'Yẕ)_u~/: kl/g-YmRTT.*H2YX"h@`6d 㜡q-l(o}֭2H|{}) JL5u5;BBRz8?6#iFIHȀ  h01C8_8y G=t G?>G G]y|g)_Ly/l W#9"SiCޙT c Q^OzУqUzq'[46e\neJdp8yYt_E l)B,$A%lj}<RS(.4|+=՗I&W솯+[䇳EkA۽uqzuڃh*~ͳlD=YV}g?X^jkL;Zk0solGu$,VoG)痟o*:?uat )~hJYbT' ?(mvX+;^{uwAN;r-\B,19I'PA)M4TZ0P! ;]DA:km,*3cȐХu-G3r6Ck!u1:H>mu^elD@K8?OH5(>w[FikD.+ $ B$Ҋӊapiv@ivL+9YVH9ȚPlt.pTBr^.m<Eq4Ud<9M %l@y9`jxdȵ5#g3[?_M.㷁&-7yH)Ȃ|>ilb j34(dLYyg(|dHo\RXsJd^X֩ +23s$Bc)|=IIRq^s@PYx$ǚtZE -[ex_d"}p Y5YT֭QqGmCD<%#$#P%u ˢeEamD9ؐd:ڌhD~ǐ~zh.à8Hǁv>BB`-lϓP0!tEeՁsEkxc7k c٘`1ls(urbe=tTGT@F'ѡ <.nZ`)MJ缵.d`b&-ՈRZyfyfjyfG1Z B ٠$IȹHc/tw-(1Phgv0sdrú3<&gjz+g!♉Ald_=,}Tt8HȐP`?nBp YaeUvFp4#)S"P-BKZ BP ,@BE)8zX'^_ AQ)fw~:T?6'>g,`taυlUJ@ 8P%D-OPd̝"7 6;8P/ |;7o!wB3E`ҧ3STfTjWu tH8S ,yWbZ>%lyhs.P( xrƝ^Q;)xkncDZ%[7Iyjj_f_>Hyd |r}iď_{knVzBwRv̭D|t{.8I %AMdDTLQ\۔ ̑ Bk($̦ovUE_>FT,!Esd!㒷$NU49WN3vג='r1ܓW,3eX2sv`6_SR^n ]oo-wsu|Kqr]U>Ή{z7Ӽ7CeCޝ>)sw|{@c~+s[rg؟C97ϝ:1$$0uc%! y] 3Ya 0N3TF_mDjW% /~ X%<Kj=ʾ9* @:j?M7u-Ή26v嬖 [t;#2ڃy8_>,M? 
1mU#n)Eֈ^R!Lvhm5^m2%LZQR"Xzʊ['v5cT,i}T%5>*-\4B[ T0` !*+ATPf{.VQ:GW__zP\0%@ .BJh["F'v@XbttYڦ@T[^\ |k Ggٙ^U1qlkp`lʉ% [+j '-yv88ۉ+dVcf*$ɗҤb*)ldKIRR G.cS/1lLr?g$6q4_=Dp%#/}vy!a|ԊH @%>tE*dbSS\"+i9E$ƙ@Yx[z]F;%/z3f":_|7۰,dl׺Uy1hQ޷[Mt>IĻG?ƾՑ;YȩށEMXOɐa>ϗn a)6yb\|+6rC'>o?\_: \]W #= uPpár^u4ЮWz TqK.er{T> @Q1C.ӵ ©6#Dtּv Mww_Ƚ>oG(P]\KscM| @z91~eQR6Ν_iyf~߇_`08|<'}>a!?XҀ%DjKl’vvwvFwvwS̉ep,$3X%m@ ʩo5\0gThd}~NV3ĺbDt ѠTfd,֠)暼0dQE @+?;~ʅLҩcEeAaȸ:Gl[k%.s6f''nۦaO<,O7zfQJ+_PQIi.ʚdsUuF(NDʬGj#G`UBrd#^{KZ;*Awd+e'BBXuCۋGllҎᇿ0+*r+ \s}mwq57x¡(Lv ~k@dZLJGche#ႍ5-lZ9Va(k*URpEbS HA!H b^K[~>(q2|(w=Zɐj,:$ p%JR].x8&7}[`KDfFD9"%".q{LZ@MOBPQDL"Ȓ֍m+tR>8]q`ug60Μ `Ӗ KACj#b7uGĿy,qqsC:nVrj\q.qq,(bPFs S%e@.୧KkG-藸qq.u;'~֑ElNM?=+[X;2ؽqJdxwLv|7_?|a͐jI5p y0UUTzGw34?8#8zGW,#=ȧ (ɭU$j$*7t煅J&fm+uќK1i,WJ9˭@SApFm {s@4Ȟv>n]b.A3}Z&}j#Sh9g]=yx,xy𪕤YCwtq[z;fףIp9Co'xB`Z^t~ZSKzAz-vNq$d9Mu@9&ڻQGJdM`Bl=dރvDTsjsc!0-Az7u{,:OޙVPJvK=xsݠkmZ+mA0)TuRLd $x2 :^a1?!nrC~zyf]>>Qk{w*dzЏ(=]mi7[.F%[ލ}:{4Q⣵G*ߓnD(>7jY(#G_p G @)8eaN'X~I]7%WC2%"ÔHO|vBV8Gҗ{W8?Jj2ҷ;/Z]^c 4={ O>Y7@.x^&~ySy8<Uٞ UeWhn>,]]lȼ;\>n#;stF6[ 18Ilj5ܒHM(mĭ,)#_5N]= M4-fLdu+EDHq,!6AR$+%Y`e@\',OlUNs*i:)u ϼ[+J+6 O5G[ĺ[Cά$I삗+VU.AY`M N78BN!ޅ#.} D XrƾG4)w9lV1bj0_1 8<9ˍPL#~C]'<3bN0}fSO\}[HY$6 7&Sgc5/+ SI \"^*:4QBb&ub\ 2qQUM#O}3/[M4Ȕ"O~^2~s0-Fn F1-x-w$ҵL:DU]w_*5Mei悎cb.H҈bvٕQ;{d*߲u`HכN޴-tZg>'Z<Mj@ E1h# [< ?BѠaN(,|~g=>MY֬bʹs#;IrV\Zo"h5EZz\@F& ]mo9+F>oMBI0a[,(Q7q$GUw;vew.`TUWT$">9硭+SeU#hs)8fEzvEzged_4!eje)  9K DέS1 wZy8ϱ:#Tfezne:9[kc2:EeXIQ_,c:Y&(|cB<Z&ѳnT7s7JunlwVǍ;w|wƠ~)$:r[ErDcBj)!R` Qf{v. CoCuhYs=5)jƚRZgT0(S0) |8Ŗ /ё!mj@& sІPݗ_zimۀhy%уbLXs} mz_ʽWEkv-_7mW,K팴 iC)oO>^6 [ TE5w V .A`e@,E%2ϊ3bȝ+II6=Y.vKI@2b =(ڇTnPYUEG,s v CY紱֥lc"(c+TQY!  
g{О_ՊL8?P_k,hEp>>޹|Hjo0C~~D1>V{H%.QG*Tq*$NZ$=isW790'fwhK%'q5kSU)1TmkѩR+猎| ;݆cki_@_Ɗ RKtbQl\R5-omޭ;p ?v$lZ26޳ev˭W>x9s{{nĽ3l\F7|Lo<?0O%}x͜ۮWwMލwl~ػuno.+|<+AmV q/ƫP˷xBEsKw-?LN+qtP#wp:ne m%&*˰ ;qEdo ]nJ;J}'}g 㾋mhj.W>F`J[`XKb2a&kcUU%VYZr~LŧmQU-̈,I ;S%.wk'k8۝5DZD+G-.*$JXo }.8l.U̮:2&E":D~A)kXND?0) A`!]b-|YDUjJX-9YOJѡdUl%A|f\@kʍGƉ&4}rHCMF%U'L]Fz5U׾\w֓jR(f(^9kb-Պ Њt fg]pn-hfw]{ᾝhA[|Hp FZ)Ⱦ py 5P *C hh 9ep$!qI:<̂4 meAΦi|/GRRApV?=C9cY8쁥lPbLypfRRȐs!g#C2)I5:ujKJǠ|łbZ8)c9R5SdHOq<(j&eu63x ^mH- K|PČuqсБW`͓[/GO_YNh@qY +38a֥d2VYj .hr.I!(@BPn}n&^OO}k>mH?4gS#X@6wnG}w4 {> l\]}IMYVSNLkřpѹ)?+o]Ьft/7/ٲ>rpn[^=vo0? l9[_HKu9mݤO eHA!jLĬ}ًy~fy( نGal alm7)g6WuqJ{҉K"@Ր N|D'88NK y0sIlO˜qf*P !njlF3HVck\5^\J?{ @sn+Vϑ)* q +ܻAlw\ R`R<(I卮¿G N,q+޸^(Zbऌ!o"QKW9oEjU e2Vg*igo.c*e  5U)Z MEdY4%U Af82L4!DeJ'*EYϺ gG= PQPϏT9= JBAsq>Z6%B'rlttEX[U |k_埩8*"XL*@XcP%SrY Y+j 3'~3\gt%Yr5AԱT}Krlj򬕏9bE}MY\:fhfp,#* E&n'F#qdH@hЕ8&ގwCc$d N&Fb.cn`DUY]rd8RvV/""c% ˋ^LEȾ:+ߚ@)6$Z*o7@xkwtaRO=u)hG7 W;vk ^-šDr0^rݑ|H4uBœw86'+/ 8?uaH<4,SrtP38Cf XC 88TP`+4^Fq >ҏ@rXٺ\ ޣ5[ N*ڂwn0ZfGn@wgt;ONL~? l[B;3;Գ2ZZ.E`1,~pwv%Yk_Dk%fh:oomNZ^c_|K%ǒZE6E# K+%-cX!B퍶Jn7 [U2߳7qh KI"]}hf29DPEq*5K^QV{,)+(E=)Ɩ+V>;>N&@$XgA{rBP{s;#5Rh461>D79stއػ$`~;ow]dr,:ܸ3 '[ߩ9狵hoPN* \!+ML^颬vLEҨ3:X|,@yuo DE=bV1=2PjD`TۤTH݆CKnX_B/ZZ5'#jvyՐT1 }WyeWO!>=Y\p"L@Ěm7 Vn@C$LMe|2 Kň cM3և<`b 2R0VQkrތObYbN}llgć$x#چl ;s2{~>(Q\,'9([#V): ! 2 ^tV25l0{b@R[՚8RnnG~M T,b7"qIǬԄ\h2;o21,(%"FJt(ʦm `j3T(# 66 %u݆o;WMbJ&%Ellg[EUhcP+CՎMd5i-o}jUWF?ŧSݤc_{ع/($e?ZןUu\Jd8>Y&Zix!߱ f?TPS\):;>Qcx I;:ȸ?{WFl]t¯Gyٙb \ E2cZl2$rgǑNlK-u,:MV \fGUL6)'D*MTTt9f]u.$5)ǎJߠ,&jMM2׸K%F9gjG)T޾JYț>$㚝>5leofW es3/WCQђnp{SU`cNjKXTe_/g|qu7&O <9qQ{KǤoBܣ1?}S`o9U?C`mf f_锘7?K%fUeH 1czFOOHMj콏އ${K=ZnV7ljNto ~wtrk#-^զ?~ JBB84\S@(s}NGG~=ϓq_X4>ͫ;????/73vljC_|sP3 gwi6hs4Вj1[)U2CwZlT-jC.[#d1Ñf)+~^`kS7V<~r_Ϯl٧NV:~8\ey_Zl=?ڕ/^3ƼOwq?!N~Hw'~ v[ϻFEޟ͉0GG=!hǡ|N&~K>>sӳ p#m5wuT)ړh) ˳qmT%ߪKӛvq5sͫ^'>a9<0rws;Gk1pѓcǺPnW8^g㵫- B}nW<$o5jzFr`^]Zys\UU薑y@xs}շş/5_e0Q:P˩qEa}jmadsrު\u^3>:F|6&,6燐6?48K}ncokQ*Kr5` *'&kJg̤Yu8. 
ɭũʠ?.Rtg%G$*i@y䷲A4)zz6 PW]I BbpݿaNK/~IP̀7+iaO]j=ekKזvJ;"k =L`32Vofc%LuXK(#4$S Oy>WdOhU"J=]%] kKKܙr9$ КnܹA5iz;nЂ:ou4RqL$nO_ϲZͺn7JEo7ݐ(#J3ʧ \*%_BK(ꫡ+S⇏4l8 -=y/Pm@W=H)]`h6tlV5]OWCWDs+Ȇ\uju"tutEeDW) ]zr *y Pj;HWLhAqFtNVmU+qZNW =]!]q_o$c]%BBW b]Rd `8Jhy*T3xt%%5TD 2:T "%lkvfKIQNFe,e6,j KZFPY:Qy,* 㵽RN2:ַͫYu.5RE{U(f#+h??-)7 [91gڿMɠB݉VѸsӢڢڷg:+Y O7E2-:n-#yb xb#'RuO,ľO9!SS&Щ'O4C+< ݢ+րXOWOzh,m~j\rVUBIOWGHW uFt UUBDOWGHW)Irr`3lA@:]%w/.|+K+@ϧ+@)cN '_l" ]%螮d]%*P{Q.th]$ˌ Sup¹Ud]u޺*TcWGIWQb=0O\V +>/[P?a>2 RkQ}-YJr*ỺIժ+Aݚ_΀\ܜmZ\>ab4 wzͻ?߽+#6v~WFG"g4`[($TF K)FDoAoϪۚ@4;[-URiYԗ4߯w^o&P>HYᶛoY p΃/{ z8խ7!H׿}RG!d~VGBFE"]xkzz݊;mEj 63|6ɹ7~o?L:s}rT@)֋9ß&fu{vk\ޥ*X 猌Jtj%Ϲp!q7<-[:u7a1sZXt>ݦHJzDG _4>z@&&|턐Lmܻ\4OⴓG \hA`rzOٛc/e~NR3C`Mzd0PLP<.SFX>H=tG=孚K*Ljd AXYL^j D b FрG!eLDe58)n|}v9]/Aߠ$4Oڗ>I O9ƤnEmZ ʖ*k#EIUDy"sQ#]g:kYߎY 'bbS(GXܹVd ApxR-zgQnjL"^ˈiDa ihMn Ξbvs9ms]a.-hbf)uySNe9OQے;wopQ$~4ۣ/G^W=\uBm彽A6a&mN~PS\L.JPl*]TwW9?RxdwfY!,dսj-xKgek%wCUmouww ֯-y W-Fٸ5j+v(#5ϭ%(H%V|_>WW_^;:e<]hq>R6j9l^KUH BiAN'ҴRƷuo ?m(OJgY|)hwM䗱7tgrD9+mȱZʄ2”A}fTQ͍QmQ'(߆ofŹ.ree.́\80ܧ{6ìXVxzQ-?ץ[K/\`š ,yҼD^aqc}zV+j D'B YN * 32r 佡TvCJb vԖi1^#Z֕q?4A)5sЏaāYG,:━AUd-'C!kC#&&pXQ6 uֽXőp.uXE0p*"M*%6RDh 8?@Bd*HQty;xp y;Wy?.w܇b &X.DZ^[[DH(򅖜N)bxЄ9nݧKgۆ_'&ӏ첄˦]9k^K9[Nm5U3Ϊzd4zgauTڛT2I+\ҁ,_aKQ cF1 )`i' )&-՟r@q:HHh)C!D rl Q-J[]hy1pO [oҡ/[YaꃜWNq\>`{棻@x_$;<:9с# Q,ߏMu3d vSfIs6J_#4{;Bu.Nw(Nwb 2pUگKga0re< %i N97(dnw;H +7bCf2$X9QYkLh8.u4Z8rxLdHg̗(<'§UP`cUnY0Ki[J/7W^KGV,ΗLߕũyFFR'cbf i-4c֚DtY(VfƂ{;{HE 2eAY>C6X(\Xsҽ "HV΀eVHa<ȝ* &,Y /Z$Y;h1%/+Y5qv֮~̬U6E]) *'!G86^jmiB{XL˘8 4 Y1$-2,;lDe)Б;z"H8+r=ժFlI8GW{])O)Dz3/ƁE(2m|H."/zS]^667ߖS :q-qݖ~oR?\Rɫ[w3ވdmDTAi{>&o\,5B@@'yށPB QK[[7"$Lz㫦"l|ˮQIZp󲖖KܾZyг^xr"W.X0x{g+2j{먦!ZlrZ&]-iHY|͵D,`G 6k{'`UOڤbxyz>jJ娴oN{sZ!2EgTNoܨ|tM&j%~RrzIO QFcmq$QTB霉dMY1V_^0jPhns29! 
ѹ>~^"7Y7wⶭnacQ=%h-Vnoi=+`ORyV|6%.C^>{C[~L6?'WX fL଼kqۇ'~ҮHd#ҬE+a$+gY2`vSYݗ[x-mhYm(4g+ФSBCcsM^ݖ%}_' Mho_l'Nj='-.O|P˽9"oftt7d +d#΋7 m\/'+Q/&㏄˓׳/?z阖>4W6takL5џ*x M=F CK s P훮eH۝,dR),Z#YWA[ALnkV`7KYu,ZFg ` KB\)"R)+s9^001o+dw(H&~F$'zEK LH<^0AD녓1/ I-r4F7%޷zbtFM (,$}ּ|J"*X!tNqt 2k% ,y8) Vvβi5=sTT^6.fĆKoˬb9ظI;+ۏT-P0`~b%[.jq>6D<X$(J tf%.hƔpm9 mFҢ"SxƓYg#!5y4 AZP4UD\  Dy!y OEOoj!xE@K!ɾ6гd9eT ٥Y}'o:c^lvWQ82tPEj{vni`!d%Tċ/Ft:qøGVK$ ց!Ppk31%кLDz_0U9]-sfWK 'Al5c0Y. Q1/?_^`<~;7 y_-?!:>\ģx6/N>/09Y3_Jv;b %ҕ_ ͅ|?<0m*~ha؜t)NLO<6gB[]>0:㵷j n'y:>a8;{uڭ9 CRO)"tyX ^m{|x8ҝ"YD|.tWE'iOy[7mϴ|2>UiqfNzᲄC,f!c!Eg3v30`'sd_.h%8݌trY8/.h? [}ą|ߥM2):!Rou$4\hvs2  8`LE9BF,)?v DύJm<3I9wqEچqr i ccܘ)gaBnhR4l4J p19@̦pV;jFcnƻ8b s5q^EՖ-] C|}]!jfq3'1'~**>o#~ ˬ1KBfYKXD&.<$ߤTy!>VϾUSixKJ@ tƷuN'nc)FXMr5GlzЫzEc>m8Jo5طŕ_Q"/UxD"֫` $^\ 묍YdTW̰L"cې4Ddsllt :3g={8 ֮'ftV` {O=Cs< baZ-;EGR̹j)bCTIF+%y7K1; eaGI)fuB*.JWcbȲF\!(e@.I})Q^`'RuL=$Lۅ\WmΕzF#%R.YOlP5*)uaJ&hx5W;qw'M+w,qkIc wG$:r7o׈@q;bYT@ @7~rrJsܳz-eid :")D*j}a<#fᢕH$W`@ !; >Yf@E:ݎtOw:h4g,r"EE]"X&b蝕ҲGd3X=Ynu}?NIUx`?!{w֍R20W2T E)SU^3LuKVwt+/e_yy)aZd@' h+bg YYh%9dC`wgwQ&33Ad!Cd$f O3PȨ0=wh;s;p4HrzCt_i3խrLbI|tV0&q%{ ?_yrۨv{ӋU{~}gO;jJoa6 #Ex1N Lef%SQMm*&?_Wq![>`nx~8SU+bT>Yn+gmi坪͖oiؗ=s{..}7[;mq->PܿfCv$s[zT;6̵vPw=^Og' /j~vpٞ\]G`? k@7iB̀{ǃ{Rk48khU+bmF/ZțS*M͎YPdA I-(-2aNbqU4H˼ + 0v)Z$xr~oUN)!i4Wܮ r˗ ?yɪW?}7=08ZAtv_٧[uoy'nggslsѿp{[fugm ɶ'!R /V=bZbaVj6=1|Pi?8]dp[Mh>|T,;pe?|7MֲtMQ(!'y;oG[v0J׺:_/,C& IJ1ʅj( 3NHRR{aIj& Ab=CVӾ^?ŪAxqge}qG1IQR g rz~uGgG_ܪ˶&WJQFM]Z"JXd.+ώPKrHSxH8+30SA&_$ۜU&XJ@%7BORbIsg ԇyYZ,Śv:eky#W`-r1ω ٯUF1&I P@6A! 
d.\Wص<9hw8Q>vH>6αV>|mam-RpLwP`c}ra[!Y"(,j@xDa }F=Xt}&Lb;undz:b*!՞ a˵><Р lNR2b('MȎ`aZKm*sWngܪ(Z N) g2i9myb3b%ܺ3gs6(r@[qwUBwK}=N0y}ݓyS׵c`vGC'%]x: Q&1R(˿eG=8/82%K  zB,E+b5BH8@ uƐثaH>YZ )*AH9E uYYxL_ViOENG7M1Q 4sl|ܘf>|n3Bat71Mg)Vhr`M=8 } ˄fv7-i8^6YEp{,g ͞Bs~Fԭ.gU@dTo:`1QdWM YonrXYngYBYBϐ%)Ԁzr]IR/CJ3r6cP˶{)jO9j@12HE`GYc.]cYynЇb o?4U繑I><j~ >z'Q8D`%_NjJ4(]4zѼW(Ԡ^4)?Yv[ԃ3͑'o-4ldC!F1I=[[=[~'+l-[Z\&6'meu%#us(ڠ}zvUcDg5r`7kإwE/hAd/$e H| Yr%V)F;Uʜg}~<*^Ǵ s6a{ d,/u*FzJYa1k| X)7[Y\ ؂SC^dH"uI ^MAuglȍg2ʻ^,]~. ]oG0ŋO~nt1gx2YFǧ.sj#ɘRM{]!&3@j|Up$$露TcgCkݮvneXe?0B `Xf"nֱd9U@y1.Iˮо/-Mk|t"BlbтQf&b'O5Z71[Q ݮH\܌;b7 ʟ4`rR/{_{5zfWM Y!힎SBj4lWP+.vq>GuװzO|^t7"6_ZHg66Q& "YsIz=~z{h?TV:[7ܺں~8G5]yUCT2,ye-Hz-jWiC&T0:jnɘ} ڌE֚$Tt Es;aoƃ{?;N܁>P1DDS$(K{AE3{Z~FJ+䎜b*r2<:[gLZYglg_}ܬ'uT>}(QEpK2;dcLhF4hid%=&:&BtRlg U# wq"W,* $?[b &RAK!f\QVgP+^Ng"/!IDY2Y cI4Et9U?pt2pLFg:v w ݏd5{-Z.^7 m|$$7. t!d#HW]w@QۺmT;~ѓCqκSv|eM*iKHY ]o%7[- kwmHtp'|)]Lp{9c3AF'b$KNIcMuWYŧȧ{%,hVׄx+0A;]V#.n)5N&Md{k@Nl@d{+c@l>eGwR4Vy@8)*' y8sW5 IvP> gnXfGz 41TQΦ U֖(#:wb"ΙNтc\+B7b_YLsjn(}NׇoEr8ߗrn[O~~z @{2䮔n̛gCh&px {N(mG:gS]);pK˄3 9j1˵qC!4νԻZՍIIpg6F rI%qJ̑3%6.栙̚uk;2=̲+\i+SL(%GRIkk7Pq-qqGi'vCI\71OvKD8o1' V# (\x!E1RkFhHRɬ sHckW%O'q 5:vڼ썵#F0ri0P0Ok*oYF"JɁLHΎ1bYi[Łr\69W# X:K`,E-sm]K.ߝ-1\KvX*ptA2[KLvѲތ^7Lʳb@x.tuY}mULa\ƀFk{+YJFyk)Py:  ;o@vI@-t$>P$bd2^<30%@Zc 8C غWHJ9)Kn}Le4N IT V&n#!3sl5q3e-=}FEnf{v6'/l+̭v9):r`٣  r@Z[Pdd3(xB#Υ9@:&ey9q2.F E$V*g\AA1J\Mەxֳ@u}R9E;0e>glt:y@}O.t7>Ko|ׇ]3Mo|pf϶TߎgL2DȍG'y9 Uo?PbuRR{w.C)N+kS`"$x:.*ºIoRY}S˅`Q ѢRc*0Ƅ%(+ 5eBq/0|Cw!9h4*ҤI+1 e}R%B +Yr2h][U@yҷjVyܢVdtBk0@ `mjjTޠèS+pkEqxk ڣ}0!Q%#ɡgQ[ Ai}/邽QVY7+ M{ڞݩ]@jXBVK5xQE/A'ytq r*AEM!荰) 1RT[:!aj3 {w;;۾\})+0l>ʘb !4|nѼR“hi>> ( x_ҧ7^pSy,?":<|__sL_qv٨=6tPxR1v귓'5 @Ov~h.|q:~49xݲWt9(:nN<zE!gO7O_/>tTЗGIt:)v~yj:Ib+6wR'ǩzmW:B;O/ '/NRx5G~Le5SHXD{>|yw|w:\\/Ɖ4"n,uZ~y4O t;yAj]i7/[,\BX'8I[:{tG?bEF/OP7z{jJZ>)+{$eds0JNQF g5iϦL=R&,ĸL0B"DcY\ ԯ8<'&Y ,͠Lޕ$Bi;ea 3/45fQj۽CZ(2ijmu02/2t1[(i;<}E b'ٔxzHEJst"I&,&+&vxu?}.ϝqwP;AI݌_E\3V/}JZl%/GEtZЁ'ۑO1׾0`e)v޻ob-=ăOn\nтe׈RF!! 
bˢCv&"ĂMvR߹[GXfk>&n'˚%i}M:Az)drj~= ']CffPy  v=|`}A̫mDtz8cbC6E֖㶃\=0 kd;2bZI1MuհE^]tf7XBLD6Lb)֡k+|П6Y;z=~K%zK RP:(&'I=H26F*|Nm& H[X2HBXT(gХuFj3q^BCL{GzW HԞ(]J'Н0' 1P QkmXKq6 /ۺ,:Ƿ֮R[kmm-X`+lઊK4D#\}pY{N LW,x.pUUgWUJF[`bw Vkwo'o9|Bvb}BOM~o|{$k}+5J+3,ß[ss}1T>YY O%_)kVd>]0({YEg<\Rs@?ׯ[aB :?L../6nVk0++JỽBxY;Ȟddwf)9#c]QYuE<7:[U&w}wQ{[hQ_Sr]P:`6"2V3V'Wut.\nK=oГ#,:9 fѡUW:yԻ&UҊ{}-J8\=O`/ W׺IeyR- >W4\U_|C*%W \)]aW}J*nr|UBym~<-b92?ZGiî(2:;L2@/엗ίz^l`N,6vYJLk#Wo_󄬋?.c;!_ LZXtϴVڙ2B2 K(Ng4w]U\2wUivҍT}PR9oμ|?n+EvXau}@ BER `as׃5Ku\jHR EIe#[0A0Nbi)Ԥ xђ@iU).g5EaƔHBHdH\$Kw낲Wg]G!{HݥRSFcy`lc&%d*:PNXJ"D~mcodCAb65ld)emwA`RQ6iC,M 3C)~LOU}gW;23Y'{瀯yձ'f^U=p\ReSp QȐ2 $3b(QJR0 M.zTB5`"baZѸT KϧV{)k LWiꯕfZF,<n$v}SEZ >]}y/yB+GlA.Dœ%. T2jw ЗJ-be3M%1:.Ҫ¦.ZSfHE@\DtغZR+qF0-s0 vڲ1jGhz|#:R 4FD`5b`ӏ3Ξ?oM| h$P{QgXWFđ1?!)xLw櫫+0A+8UcDT#"[I+18c$Xe=94 +3vzeKjH(r-zFllR'T )Dl(X NBy8O#:0..q i#.>D(P zq&0/%2Xm}ݥ4d#.ʹPd>(kܞtg{UɀB$$ ZS"FfT["lP$n{ 2ː%fEyI5@z9Y]ZVeWr瀒 vsY"ケԙ 9axǀ$HdOCdA K緌Zwe*]Hݲ.ʱ> \!PJkV*!I9Z}Szl czm=@#a:c '|Ҍ-C>@.(AaLOdi?̱ނ-b:SHRʩ $OlxL=CD69nerQXyoU v鐙{w޺_MeǗf̎\ufEN`/ b4)вfS9XrqVyf^,x7Hi1J I$EIQIֽ̖μ(^;%!YDP)Α Q:CkC“nlgz+0uG [nʌ 1K+>a=ZYIw I^0@Ku@/=p?dT+ 8bR3 LNyWAK!f^QV34Iv>acRR P/!^fURX%2O>KsFh|}s9f:q`h%i׊HqKIDՃ^IO1ŨQ ٍ2EB@|4ZnEe]N=V*rVƷBJպJǛǚH{QX^ˌw4v!cLCƕc~7:F?ݝ̧t): ~Wۥ;Rz"QCQ c (Բ}bzPi~}U~P.}DA+L B\1[؛C4`w6A(9ifO+ux;ݳD^Òd*0GmbY&<ɬ%Hmf8E'f7tI;.oݡBR&z>Onf4(NElr# )|Wovhu, .K<)z2R`ƨf4 (OԨ1&Q`K)Lhm&kP1%$Gv)&s,I: E=M)+CN*H6g i_~ w{U?.hyQxr*RYg$p$R` FFWґ?um{(#91h^SIɓF! X,xL bM[M`릌y#->h,C2 ɔQb !pu4Č^yO`P3eu)Y1)]ǚ y:0_huޙēY5Lc\rC @?E9'IXHп`ֱ U `@dh1RzgSY^haβ\_K9 W@ څ臿^Wնzwğ_;2|O?p*'r_?{W6V>Κ1CUޞtOujfIL*Fܢ ö$-;$ މ;zetԶۄ/TEL 68aϲZ!g}-ǯ^ ދ&v%\He"]ǡMדAQ݆qщHz Yez+ &i5)Iv@4hk(ֲSV~`C r_8t9N&DFO+n B#X(.Al:i{ӍLg'=kEO/y"DD 6*sq |Rz#F@hy/y* `J"8Yr&}+?j k[^WtէKs/zОty%}[`e74/0+量-4C{JRʤ9/,R}\~)r񡀵HJFLǮrWӿXHS3v62.N?1+:gќf]%.j D Rj0hȶuP_\f<;n4iu-}M |(/8D*O0uct{I0DsrcS eQWrs5AeS ݣ]u2dԳu ; G= G.+ BXT͝jn3[*9juٍqD\45?&8vo4^cw+SSvLSz S81 jƥ\. ut6gwO.s# ;&ɍ ű=f^ۗ<{`ߌy&eI@گ_OG_lgq UA* 3h "&S=J-b{.PE/:zix\Uٖ! 
HǕ#;Jv˪=j5P  p%4<@bX9]u8M; ,_&]R+R[2E .@,z0;˜??+`Յ4ՓGD~1F: lp6HFZ{1Uږb=D~7k3ָsFӽë_R˘(z$KKk),6-px6K+*%LK6L,/a + KJo+u7jb-w)̢̽uO=ҜFv@ c! BjC(.ȃqanKR1G ƭRȰ, Ƹ#aREb?D h:x9ۃg>%16>xJS5vj 0?+yӇFݹsj hf7}a]0tT'DCmL%V3RRR%F+=s'Z~="PMTX,HC iH˜Hj3TAv d>Q]QXt#f-ul)&Ā6H-e*CRǐ*92^!XElLAgNp:[V3.:w"R1姈@h|׎,x5# ` ICe= /DeQƽ`^"eb tNu 1< $` .KauW u.Mu&ԯy-+#sP: 0Th|hŭvD,;-A+̷U 7_ZptKZ|nȵK^G*#2itL*poRafXT@Jb,#IYP8@2}j/-?օW8C]fOJ>:$Q-&`&2k=YeEċO8hAcb=wO,;,Hx.9dO fڭfM 873&yHH9)/d8U=33b [t2ō՞k)~12?`-T4W)${#3O&ʁ+cCeRSǔ@B1=[@'^t&ϝCϣ<+~wD< %׵+9?.NoJcF1K)` )&т-u?`G q:ThBI$41chHQQtVn#sj@h]Z> rxC]ö!(?g}h";xX8*g( Fzs9N1D.S+{nޞC--&-@d ?נx[_ʿ~7nzAr4:?/m-r3fv>W oJ;TbhU6_HU Ӣ~z^;u~a)h6$sA(.®dNRHD;,$.9dTCVdY=:d::dux`ʸ6H ,ZSBO/( d|pZ&Qv= .W$3hڍ۰QWI>xytjd3؜˺GH҈t?w`~X8r\2ΥT'{^G“},Z7?f`Rxj La NTp*-%z;5?{ز&4沯5\6t fw9_I s[|p{\rp ALJQ:ǕZmbV3e8t@:yiac&PfTܥlrD`p'93rsv3d2E۞,a6g8%GLԽgy>T_]#5 $U%̖@+%Ѝ _c=HN4"d4"8܃!|gZp ݞƖ!D!e!x0|1c2b=6&(ͭȹaI Zf=KB8{%, `ZFP Qp60sן7y/*uxƌTCLKm b5UU`d}42,u#]&0>QOv‰թ>DdS8 O@0tuͽlL[2-n#R5ߗL} Xd$; LmguZ]w4̭jI]uǹw;[qlz֠ތ%,c3MrU+d®`~ꢚ$W]kUsuá6m7&r#iX7Eg&5N~Rn!qRDw e{0jˠ> l<`ެ֛iz| +#/o\)~usv7Y oO D".5_]o $ǢCF>a>E$Nn!ޖ0}"mX wikMUHQUDޢaj_ZIɢe- ^s}m77uz@#%m,St9~0{TşIq>RT\Ln}՞nQɓ$:ϤӌF}R`ͫ| m;e{4ki3ΪdJˊnS:X^+k)*5%܁͢+&9NxQu|$yyW],˲ɠd"oYz@ 7!/wA%E׻ si%Q[>!twթszf,7!ƯF'ڈM숗W^̩Fc٘ڈ-Flt|F<#mĦҪT%rA*5_4U҂ohLqL{-7^LךȚb.5R5ö; pjTpV_v=8?=)6OqF31唪>Sn,潭Ҹsh9G%Фrܷ`M )|洌/Fe[q?UEH0{f-bԓZa5V~x;;PI%+=U pc5xiqƸ3j )+OͷDmR;3Ĺ9J| ;9 RKh"1).[ KA߱=yP[y :Nt 9o.=$z.>)oo]6aW[-//NجGky]sK.5Z\⦎sxzk̰v*}ʅϸ0?ACp 3>]'i?Ϳ<#pf9wZW|~]V_jOWŧF'xo췳g[@z}^W֏rmPfXz~raRzT/$-}=nԹ,}=*@L<.}}K_h38Հ׾ZNW@;U ߐVP^z 5R7-*0=0kh͡ nK㟄~z8'U9 v8} kE@nUedܑEп>9]Y *|zң4y仳 u3±H~q~֞[- ]S8mCX4eb0 ^Xd#xy҃xSE2CQϦ~{c6 {(=O] BW8Pqȡ8 𷸈t?]Wej'kϥlP=]#]=uP8#c;p̅V ;] I_͈̆\vsn(ʫψo WWi.td@qHWCWҜjՀ+Zg~jG3xt͌ Հx.t5В;] 8~tTplگOW#]];6=^b֬/lѩp^O_v {s3pgcV3W7K/>WU]yNĪ0KLj*X@QYz󒒆QXUSg)XilƇ7D[Y!6O!}¸×{#dR_R1̜.r96yz187YoH ,9P9^;DFaU̒~(q+y+cm܃{v,/=K\y駓wCk_n(+فHWOzk] 3+ υZNW@ YQO3+g0pYBW}+v瑮 ]yo3+oSՀҏ6Ҿ@HWHW(7#`<pc ]c{>] v瑮 ]I] gCWK΅ڗZt7h7Ȍjf?pυZ{(HWHWVs+V;lndZ'a (tut#b ^ڻWΨX 
vnR;[6VݛYOZѾޞ)>na~Eva.hg0+3Pn<5} ԩҜJ1n>t5؜v(Jttgb//!Z3(,natLt#{BVf"]7.~/A1~zs?D~\A:jD>x~Mv 2CV9 ]ʧc#ŻVO?ooиt>D~j'7VJ\gԋ}b ?;]EJ?d]~nn}m. SĽi"P0RdqGU]c#^Ǘܣ>U#;nْ8 E ^kJ`~ABEC79 ߥ?t9K HWmY?,{Wo:kKKI;l_ Y)x>I66>C{~=Bqut=tmf`/'g.z"Pɚػ!M=YVgVV6I痚2YiTѢ m*ޛT jc1\]uM <1vVueH~ZhH݇R+ME@DZ1"X"'6fjACVb SNdZ(\ C-QͨsRsمhѵU=wgTbwR!va2ٲPR")7KU-pO="P#1ݹ`ИKP^3Y*TL%dB1מ၈f{\iuKmhcsm DZ:fmqt3EPk1ZX4k =lC$; j3+ioTJR<.U{Y"[BZ*Mjw$a5^-0y/^q6/ގ*A̸0ĵU 9l8=M?t:Y?gtL{ur~֜3I<6ި ]!ݸ)XFOƗauh0JvPɔ(u57&QzI0ɢd5AYӄ>o;ОV+c I y 4tH5iCj ޾YXjZe~ 2kâHY9. ˃LT LGPՔqe ow\_| j7Pٗ9%5@X v4O aD[c|;=<5!0 "qp,1 %8L*p+_A4N@7G4Ft\q],tq:*UҾy?GO3_i`yO(͆HO3KHy_ə8./~ï[. Ϯ& 8~[&e^2t _(uh\xw?͑Ea٘pUoǕ̲fYiV3=+kQƇ\,KjNB)U>n$l5O1(rVM50BϽ"I {;ܶczJVZՒ,ZAGV85VJJ3KAjIv!0X6N)vɝ@q}o^f$J))5ƟJU>?P`+yox?X@|"\}3p?4cU\^XpGk^ֻzϽlIiaz ;\-%g9VC=++ \PZcPJk ^!\ -6Aqvd4;-yp.fp;<9~ Cpx>ͳR-ӏf=׶f[Uw{ֿYM56Y(msl ,hL [c' *޶vJ|5IN?O}́赝.u}0ՋΟ Gd%cSK`9kEViŢbe]т$}J;V#5G7hn`*Gj&ox nRZ67ܝm~UfY~eYye f4,`Ngea]8⤕.~NgZ w̅9|uYS3:G=߮NcǪ3 /KQ)ItG'1+'Fvnhz:Q>b!p]nZ݋Yo{oL˭(hi{}YIe>,ۖM%t=-)8 s2u5{%9lŤ%dpR9!IW->dXL٬:5Ԣ|#&TV=R;5ǟr "TT#Bj&e.ulÆ{|c\[Z.p*\\Զigz綩xTm:_\OG۠mA'{pz;a9]7D;dZ*&%8GDxfR$\U->V0SfZgA#7ƵM@Ui*jOeÍ k*Tؙ8#cwJw V]iƞX(YX(abJ"=-ߋk`fW4϶? MƣGl#&vX/qf=0D V|tQ[%wAc`/dpdT4*[I%s A SS,Y1bw&Èbەvڼcާ6,w\$T0`ׂߪ L? c(^CfE֪ ):DrQt0 >1W$BP]l:3q6~,\;ӏ}Qt>7.qpjlMњ\URF&| [s`e5XE.pk"w{Z+0l31T1`V n#bgѰ?|;!UM4M&VuU²*;>S?|GqxUU廓t! 
ü=nS(B|.e+2dtM.GE;~P;܊.c:]($>fΘcAЂldځJ%.PRسzgϭO易iP$6_}үaҼ?콺כ̂ʳĐBR-mVv T X5FԼ?$vzm7::`5&$4/\!/ b6s/&~`P$6<~,9rj'$R7]f!Y%:dw=ٝdiZkI\щ vNQ$f2^<3Qx%)ӚDo]^B Y(V, !dTFN"wlk2pg Vg/oϚR2mNVWl_9p9;rd9JF "[[feJ3U 8np41%e1ē`+'<'bФHRMT9mZaRdPhY[f%^4Kʼaю5L7@THI=j)SMRS񖞻JnT~㳆+ HܻRFWRVD2h]ZU?ykkK^fy_;œz8ɞJw|M8KzocU(+mA2 )X-Sɞ~}Ó9d/7da 2ˣH"B%fFEلx [X2a00]$㢳p7JޢCIY( # 6dʪwk-oLDW[O>:[WqFH>ެ=]nKb;i HDgͩ I8hSI&6YIS aD(Cox1<4g-fyHd+I 2XD2>ʹ6) 0u{V$ Q.UMف-ǣzMB1uIޓ~+mгdD2 ٥^[NtFke݄[t PMB fa;^jÈnx :QJBOnuV1Gc€A/mJJf`8Yt9qZ.S Ze,S,X)X&ygcJu]yؖlSm.2|2 2dWt+!W^qixH4τ i ~;~ N]ώ|+_~EunzN$NFo^sL70}76 VGG4X"=8s6\vϷs v\QiIǃZX <9ћ/gx:w ߪ'Il8K9~5N ;6/'a}Z}{3b<S׺j^& yqh^O`5oqՋ/uG8+jRN _yOmIQzA3~Kjv@cL#ۮ1~^MZSv6ϦNaP;y y-?~KX 0+pi߮ <{ABhW`ߋ]\TFmۉąLBIc|@h)-YZ "{tj.t, 53#|gzzD ][Jpթxv57Il:ب՜igzH-Դyxߐ;}u1jn-[()s_ђ67/qR.BՍ]W7~9JsUY|MAXf8]Z"@T *UVgٵ:fjZZja{0<%U%C td2RK1ؔ#X8O:#Y3c]5YWg[;Ϯ?\ohqamU۝S]D&6?@<% ~ȳqd&- 2쪲T)p%* ߖ) e;vgP& Ert:X 0A0E+`!9G!Xڥ|L|Lq^iroQ牢kx`C R@(ȢQ1 b c4^*|(`bkck h>H8{]ӭm']"]NHvQǶk "D'T[? Q'5>/MC:_a2r?%ۻ&gO)=yb.]'YsҋZ2$IVRJ.= qV<9^P'G uv:P&?b@ #JCi{ńDcɫn9ZҹZj۰tZ=P=X\9sR _`gzt<[cū ?'NO4drЕij/ހ GA3ۓxFqRp6,+, (e+\pRDC(Yc(8Pej/ ]?ɨdPXe,E=@ADK_  hvcJIK XYӟn9otZ64)M>wR'å m>`<t:F^+9vDF뛲nTL7_t`UҴIQb9*UfI[23XenvpwE)1.P^Gs\Yo3=d^BV!$9 .D29TsTH~fw܊˭0ȔI>imTW B+iR; V+euHH%Rq: YLR&YxlA9 ' s)#~͙id-2=2 dz┇U=_Lߐ!=XkR!:9A%)E)J%rN:K ޓV6[Cz%C67lG+ ߷|)i8[6u^d>'/)K^gVI)CCU L8J_^[z١,mzεeH_Rgm]zϵ9m@5@:yd*$'<ˀV  >pIbv%UBfL%U(c5w9@Rm)sjW&Ȓz&a @N D/ߏ'[MD0&ي%K .^Љe˝W^Gi^+_k`m޶+&aml{u ŕufoHkf5<6`DBZ X $A,AT,D9z+Z6g=Ye@9/KLqn,0 KYe{%"2Ԛ )5DMYqU6J(X{\"K k,ĹyRw'>ҵ|4~E߈6{s S]#:ZWZVK&tG`O>UoDrU6dqQT=qVl_N@ҊJK1dí-w@JJ/%k7w!?;SDm]R6Ӕ΄J^#S" x,Sr&rՖ玏bp`vc*@SQVp >rHRB*+cYEWqiف"#M##U"#]"#]%LarC'r:f e&dB%h$fB۬Uz,%v˥F}YdVڲu! o}PYN1#ug`NR^˞^nM .Ѹ+;{+Bf!$idPML٤C)y,GY4BP;0Pw#qQ?B.i5,'פ:xD""sZHN4') L@C:f&"m]("! 
[wL1m愳8&Țu &)zո_}61Hu6dT0-deT˃#CKùyc]ұ)RĢG~е/= +*ZŠ~9@ˁZrsFa|h"ӳQncQ0Y!TYdV*:'@ tsD\JxCfZx>F1F\vPLOGe*!Ձ V\>Pi #>Xx7oKIKI\ypR**h%=*:TF?Xhe $Q.˔BhqϓT 1s$E1n ΁ŹK*C2z{Ȗ">XCzzLd{2}7sأ#kq{CG/_]koG+f0)nK@Id|)q, IىV!RbS|n ؖfTW[ԭ[bobRt≡ QrxQ@Ɵ'G=9:/9Q;f) YZ)rɩ!𐅦 gӍ[}dܮS8>T2}"t{es~|iN3ef3BvV'M2 #XC]Y%䑤gˣdQJ,Eezzmxk]4T%O{gRgn-7g7F-wjYXΔE,pJYĂB""JI,0Jh|9&QvݔF]M&SJ2')/ ]'񨴸LjdOj|֣BQEA[TOЖE 1I,Ktڊ\;dCIܚmMIr_Z(^?_uo?gjsse|f'{p5not>yZCfj>{{ m:FӭN"Țݷ.w`FCrQlYx5ݷ9>q5%n~ӬnW,Fxo7ǝ׮=RN^o>c~ݻGb%^?~g歆K=&`g}Ky{-n&VUDa3u(V3Y ]j(h۟Հ( }V+jPrW1茻*pEgUAtwUPZӻW讴b]Q`&`S,2Yu9nMVLrBdcT^J~S_(w &҈OsZ|FrT]JYȾ#Ƿ nn6J>QW bwȜZWc>ƣtTq[Q o~[ 6Z/ů5 ",f_U?}^Ȅ2)DxCltgxC6ǵ7 Z 7+7rWXSwU2wUJvwUP~U5+kCaWnw Z{&v zwո}Ҟ]XsJ{^wuZvWJݶ஠wWv=ZR!wU[wp_ Z& JzeK s:xE_ >twUPӻWpт V35+ J Jً$(+Į$hJ \]qW-o,(ז/{wzܕb2!wUhapugUAk[Ϯ%+8C s; \1X>w Q _B^޲ RL[Q%K;dj4Ά>: !- oNj̚oҲ!UKd>^*~PW4Æa'ABsJ ́pMýfJ8g.fR*BԳLپg#,r2'a@d"KoIf ɜOe\C"R*A&4z}^TmkњFb,gdL8y5laa,-x3XUu4(p){ƣ1;xh (LfnQ[8G޷s!cv枷f__<_78i\ܾyaj}?g͛'?SEf]W `/ENsF6N)`Aơ!XlIibinyb48xԚH!O~~50q-~yY sQ7] y8|7qbs9_$hՔS'[XǷgC3?c/1v \e=i^a1z&1SD8pA8kUclwXU5T=K} PTM x.&#VN&!_a u6XlQzeTgXse?<[{j\(K9:| !8*$z .l:c7# pfw'Z\hZؘبZMZIvnM+Cj#K=n-.t ngU:WAUXqA+h^t-4Z_RYٜ@5J=O]*4ez D{TQX^BF%JLSA:,:[!pDM Q6JKäRhBCalȹL*&&ƹ#ۂ-a=Uʛ~u "AG|lѵ|{"ۿbGӷ^ƶje'-/n 9)*ĕH%x6x*$+` ;^d HT_6nH5+"6shsD6DUZ"! ' T%bvwhL8 4X͐D#R-{CzqC(ގqIds21/xM8VTK ra%C=\4MS$RYϗ6S#<(eb9EU8=c3ɋy9AR !:ze z߈oU1K/L.?ltSjAm`PT-wXjQ89'-> 0 B6ewDykZi#22VuZK㻛Xx7242A~! 
6?ه}]ч1v[==|%\6P]۾7WżPu]bא\ EWaSatc>~Jvk^}[56reVd3s1-5t(w :\PК.kB˯'w+J {6aJʄ8%xq3 (¨u 3"x*P/lOQkFGTs%F< GW@͞gaem61Dfj{i2MQwJ-c 62h G},!HzfJ-R 22JhͦE 2pV%B'ą3'frr5c2(Шra n g}v<gnczlުC;*~噔1-Ww7?¦_BʑjC2,qg7s,.=$[)pS#iF/f'SiUHV VȾ v9?_%wE?F~7ԇ 7~__FW%ZdyEa/3/rvX.Y[oꗛE6j^'YKC$f\bh}h独xsl?S ]/֏e>|;% R7sXU_a?s| PZZ>UIĬf žV{#$YܙqqghGkzA'²5 A?*$N(ѯ]Mn*}cߦ})[=1E?b7NUsmt{o|nyewFܰ_uHmVӖcp~vO(vu!rGr W]=~{yW7f֧9N'ov/~~;?m+gݾE3kW?띈{Gx4y߻ǿtf߯y͝8rމvĭø`fi;Np)8Nw..}FxY/vjahl&p4X,yy;c9m<ǡaqT@BVgҵcK#Z#dêk_׎"ݰ߷sD" >PeIW8-hȸ,%Cw6U*fsBg7S=(zهl39:sg^$ !Wvܺ :(:?LH'Ted,6e,hj.W>F(6GOUPITA*b29YsߺjZelh1OC$:VKATNgy3.~*^F_wʗ7חD;k(%P ;ZU\\uxc:d|bRY!Ȕ"ŽH̩&AXgHsSJ DJX-9YOdw~>UF|.*}N4A@*/)a M+KB Du8~M+J(gFZ2 3K̀Y򯥀Z1$ZcSE_bF~w].3h^͠y;DہV+NԼq ,D h!ʪ$W:_Cɴ̲̳,NY޷K[üa)!Չ Tx|ӧ/ ijm,e &:Ą8(7P4JX%Xʽܩ{mSjr&`36 ZfT-R?Nsj@}NqNl%az!Rr²!KGsu>n^m/)u7s=qFG'BGXkfp73Qz++hV>XV$棑Gkٳ_A\Ҁ +dMV&Jt9?x,Hş_-ͧicu9'a[6eYUjN9Y J5ZvA:7?k+Į8ΞwkAͨ3N|w#vDjTBS$,T8L:cn`DūdY9b2` );h#"}YUJusC/teld!׺Uvc|o]hнgtaR/<)DG794:st{m3٪r0^r?Tz8VwrW0g(ly}Y:5GKêu@G59*S\-.hdr*Ѐ2 _Cpc]iî-"PE,ٺQy ."ZTQi`dz" qfkң9D+][k`f{xTd׏a+ N빣 ^[OBW㤐AǢ5 XL4CdAXt%H p%/$ںքTzWv3gw>IfKSɈЌh:gD3g9#nxY mϹkrqxebN)tE`uI`t͈R>]q`u`Ӗ 1e%(-جMMACnn3b7sbX&ɋa\S%E缸-* aK@ծd5i-po}juׂ~΋ȋSɇݼ|ع=yv?iCm<{5jO K("=SU?: Qofb32GYjssE F+b)n5r$ߟ> Z 3 W T]):g<;N|TT$ʳ]8ʌuȩ)y86*ՌY 'oP5[PBEkcXu"1$p }j:Ψ|9:/Oiȁio(D;:*&j(kb!߾)mNK@LODC^ﻡm [\_7 "pct# CSgQȂxa= dK 74LECap5TI^*S)&EU9 sNcE Y e1 dTj9D F$0%-Yy)U;ng_&hA7vL(aD[/V[9}W^>(yR[DV`LP#/dV]8촂;bǣ~.,[M),y~8||}32}))c2Rx]O]uƗk8Fq EہlU\J Ek5X8wT>&眙q>N1eWηb+cBi;ĒɅ0b/D+C}A (%(aBŸd#zbXTܻ69{Rl:3Vޫw*%HP m(x.$$Xpi!s?{WFd a!`` ,f˸FeQn *:HTIݶd#^FdEEPWe!d8ii !',u!|RiBH"ƑrW^d^zUӱ.ʫ녥Mb6(ĨȣFQYϙ&4e.(0f\QPYqد[<д;@e #MM IVK5FZnHC%g9xUJjC{/E./ vTߎil-yXӏܰݖ`et>SI?7p'A<Ђq PL"JNX1=#< #\/zbxb rJcX I"kT JB$aN@wk-o dw3|g'iJɻE5彇6$gK57Sn@E}9>3IRRŅ QƉUD #R@)퉧羧3Ϗ<`P`R0G B"aJ(owJ=4&,QZg7t8^9:/ k`(otHyVko #A xjQ^VߴF賰^W-6C& j'inh!} O\We 2iG(#2hlfJb+006c8A7}trCBV*#g[I\x͝H<'SɍARI;Lq p >'rMi-8W`cGC,@ P~t)4N.?ү?$l6u'|&oORq|<_A ww,=;0O|{idtqa|K9H*ED}Y4 6f_ bP{1;0mR4Q rUM֐oVwRMd &Dׇ 
pfY|1Mjy5P"mˎwM è讑#H~IujWJ*m,w̼5<(+i鼘L yD/Zʆ V|Frz^nܬo˕m\z{u}C `-[Br #6ySMϛ>R..17XNzh07kí0i#XKEo(ݭ(1ֿԂ 4' ! 9 |mjTNk fǂ iRiUK"Jp;EzfuFoB VJvP* QoBI%3+,:CW .]VJ՟8tE8I,[p?D /KWUej7tE{wqzu03tppW jNW k+"9#+j49˯'`jf}c(̀ߤ^N'c-XXrWu !%PbҒL.UCW8&2iǡ+g0Z]3 np kW;P9tzw1O6JW {!WeO >JzztE0ƴKUqg 4źtPR++CtE1GHt\BW -m=]%7_#]1,+;DW .]5CW`I s;CW ]%WHW)RVz2v XJk+?uf1eJhj;]%kzzEt%Xsm c9$HQI%"_ۡJ!ۗ=fo1BT5ά.J,SDXzн6KgxUҩEc↹{Uˡ_eFAECbW0'XEErUIv)/UK̻Q]f0GGd=mJ7<>N>T%vޏA3Qͽ/w΃\=36PNfsǗ2nKz]ʋM|]e>%G洿$rRyk0e\$=kM )1YfQ̔0>8-W<2UEM)O9DŽRn&x92ZEw9lEbD hl:|cpO) ]\cG_ֽU~^]xH&v&IYU\T=5=uDU]39û؞uN~uߖם}`ɻ)¸sbJc/Z|P?Nj, 9.salybr'R?e~>È=)[~J(!69hC2Ajf`KU#gTZqK7~F,B0#%be}0z)-a$EV 1ܯ~f if+׫6)rvlq۹Tۮ.k]8_yU[禴ۃ{%K! ">^)vDUP縒Y-QLjyғf H1F1ZleF͝QFGR*:d/wY?&y`A:+_z`Sdr][/dYRCIg#l!N %^Ӄbm z\(Hk⠋DEf 8AI^sVٮ.|E[A! t )JF1.0L&pe(`I% ǀ5e&#QHYu2Lw uX$"﵌FMͭ0pl& z6 3^[X4SQe9f1wn%| Ηbw ˕kn|'|]n2:|̆jikb2¦v2o쨜y#Qڜ3[yފ̀F5yq&)tD)X)"H`bGk l/;)ʞNCb" XSe OYѩj!OSVNSOz¹Wڏ^ jAyN*lc@AW=(PVF7>&U40/.fwJK#Xk&;~BRVGQygguFL՝u=t2tV26t<~g;֓rwz .W<\$^)p6ZחׄeZ9ѝ'tԤSMV?+6PڹvQ< 縼eǨ>AY@͢^ovjyU룇VszH'ϛ[nC<x?{׶Fc%?P^@ 3/3<m-'ev,[);ꪊĪq|_q Uܫ]wSy}ח>Zf{k8%p0"p\Yuf )0=W|{v)X^ıh#X22{dS-8bҲcb@ xj\%/Q8B![7.3S(fم<h s$ gxf-}8?= o9>]%dfa*MTqB搀y ,%ͯP|wX=ZI1gMQI6E&29#B2̥8%DR2CEiRkob>N618@U$?j9Ore{|e`g(o/JN-˺FޖE9[xxĕO|.%'`TG5aL#>)?iFaCt#./>NAIJ檛r =PXp{Jfxv\R>)bhS& +z_ݚu z X>?%T/lK yn\cLV5V'նxfbZC4Mei&4|M=O'QZlEk6[c򟲺54*ϋ,"CvJj.x` j%\8ޠM} 7K"]g] I gʕ [r5aݍ,H5ù"pA5Ir*L9O|ȫPs 6 Ԥ&̼I ήI ̨I ,Mj4 Nej:؀}>$H@4QJ7lۤƨjҿ>Bl{"gcsUV_[IÖ46tNUyHU/'MW%VE gTT~MYM Q}Mj$R2Qf1)E1`Ú}sꢹ4wEދ `Ȑ0F-LmٴbrUMgd伋Xj4d\֔j\ u c e?Xφ-gG=[Տ@ւz1AsT}ڊ`)"!h'b *g Z_Ī\x^T` |[忲W "()ԜP, &r쭱+h5 jӅ'-<L\kO y ,lMX|KQ\JUG<tVn1C Ǫ^~B}s ;}Y̎Æz4ujݷ#m4#PMiUJ,-"9b-k4y+%#9p%*EyDtm_.WsQa"w=qG=dF} osvY^-  GEzS0v3y1LBOt~YzɴfZ~;5( KɑMrN1\-^{\' <}OÀ̔*` ;U mbāpǩR-{tJ*Xog[z S*T k5Hhxl, Uu5E8 B66E7|{ǥ{~ߠb{+CtLy[Owϥ.a_Hvz3{_ F}Dr-_N cN`MX>|m]ZaҪqni>'z/e)Y 76\]-8$]/k 7HOa9fEmJuĜ-@Ui1eO9py#Öc!&{F{ZZDSp)(@+XjvVv%ed%cǖ,ei%./ Fj iJOR[**Ir>xDpJӣCźz=68093!ܾqˋ#[jZセ찫!Ȁ Y Z\g@Vg h ˀֽ_= kJ)VRhӧ4 =]J[IhҖgygu3zgGb$cO3PueINJ 
`Q(;e3VbCbͱYv>e  #>*ѷ|r:xŜOow ڄ8f(o'/|d'wUm#ѻxuh xĠ&I}3&jm)9&wPpf42eɖT&cXKS`b*u4&+n=\.~cѾۆsX2*`lUV)Ed&Dg1yes#< #X^୊cLQUDsBV'hp D0 P&s:TUQI" ,Qk$'gJ:rTg kZsG;0$7[Q4pv<)SP0@vE90O+̃ %( 83s^MPީ1&dF*0ITf0Py:A[NtF+aZlٻm( j'mnowْ7nN{CieJ1*r*Y1\p4"f63zEѭϑ$ɃxÔMx)rB)q xAP+U2KM1:):sdK:zg)ILsEvYKb/a]Xa.|]q69?n0^Vk9boN:z6\0xbD1XŸՇ4Z;}[V,nj0u6MG5snow{77=MVϫja8|dy,MO`ckϠ=oTm?xy/& $"j@W#/'u-7mybW_]]ͫ?ӱDwk?}|ZSO>ӳZn}sz녒w+ǦfZYRKz9ϫ)_lrs.&.ygڥ:N4.' ?W P_o ;*^0+Wӫ4(n)blbin6=MpAa's h{o<8`8E*)p]\}7T/[n6WrUq*W v:ﵦkޣ#F{bH5'aNVgq%YrqN)*q3c/,B)0y4T ;8Q׷Q78sLU@Jߋ<yHJ59IjqtA[*Og6T~n)H3 C5X ҴKv 1[A+;7mpXWALf?OcK喳䦖cļ.~vj\8WV z giQjz7ŕ˹ &I^23"p,1CzqyqI}uT_d-m[ WH7FAsBIu'W`Kz)&7-͖Kkc҄2%E%on+ C!4kg'Y#(}ӡtq3y|Qn^m >2&.xjJ>oѼ~wыz:yN܇hڼzcXƒ~%V[o4xp{OPN< s#9猳 LdH3`_4BX)1CXp :M Z 0>gSgC)'KG+ASꬩjfP`^X$JmD&.Sc`؁>eJPy[tc.> ,V2%rZ6`‰lyFSA"ÙYq}?v:k9DE%(d$@T+۽aa h$tZ H\mc3A:$L">&Hm6X!F!l_RhN0Ld`x/2׃0}kaO9Az0u| ւU0=c3'/" S "!:ze"+vxZ!]kΞw(8vShqXz벊SكK^fRr})Uu1s2` Inκ{$׹K+!Ɓ}ݺmtV@; RZ9n % 'G3I c! ^Ey㷪jS뛴,떂OXqOE*ir8Ab*(bAB`kж=Ҽ=1o2|bb׆VꆉҰO'M)iZs %E&.]TbE-X1c8XėyV z1*}U {x`a3o_`yz\axjhW+!Ci,h 2E8 ÷TQ~APΪy>c<:ԓF Պ \ S[u)w6E]&5Wf9Z-`VaMDO+>ZŊ\:juno&x>rY{hկ㬝0u(b>{?=}"s0LTRbK/-3NAz^Uzr:9d6jI XZ-EaN&˄q`%Oz2KrKJuKQ-9`ڱ=+Io lZJ:(Еc׫c" np͞wB]Rt A@Wjv=UFZ#•/th5:]!J3!F]`+xo JBwe'tMkA]!`M{CWdr+agGteAD_ juBS@WCWHd `ԎpEo֮]+DcJM ])$zCW׮ƩVa#&FT3BTAǠm'hv1G>q~9 >O8,耲$O;_Ԁ2;xq?ċ.N$l,9ܤ6էU'ެ Zn7 ʍ̃p8z]`B ! ZA]`Mo >vӇԃvЕccb5( iRg -SPyJtu߮0U$7tp ]!ZNWr@WCW0 =BF8dA.wCPnDpJA[+AݻNpپ섖 tut%)!0+kT_ 2ʻNWk!&tGtM ZRMׇ+M1\ }+D;]!J;,$]ixsKƌIyt6n]ϢxDWf6IR2| g/Yxv0YxvE =K'[‡TKIsw.2~ur"Eo 9{c!ZyO@ys0ǐ ##BZt]}7tepF;e7{׮v3Zq] tu߮\^67tp Q}+D/P =`ڢ+i_ *uBF tut%`7tp%}+@:]J@WHWJ)}a N ٴ R} ZfKhL5\՗|r.JbºK3-w2H2ٻF+WaXSCI z?eW %$5&Eq4jzJwܶ /{Ks MEnF:#C6zˬvȱlW9zwv^m}g_?ܔܾ~=熘~c xMR2Q~T_=4v7-skv)OBfvzg^ S9NYKzE3(޺AѲ~vqQq:Pg`cc>sB8 #ñɖ gO9K>AZ.*ܝ'ܣqppȏGF۞f^N3shAoW7JTp^O!X.rRYZ/d yWVߨx*7lM 0fG>k/~9o^"gEq0tp>bůibˌЕ:Cz~[#pe8bo#Jn" '둌Jp,ce.O7ow so ^!]i-?$uƊ+⍮^]}s퓫6R>f{'_f|p rH''[QPQB m> h-iЮmtW*A5dR-*j]rM Ƣaen±BՄ.>HN`Zlh73u x0+GQ6fa)-%k! 
Tʈr4TB@A`TP#sXl[lnh+U!RcQ̨l`&T(qCYBL* ISNPN $_ ,㬊PLzJ V dWV"w\Š6CA 5a ȸBMACXz @4rHBB("2m ،t4^ 2·J[SR 9 q&̄!VK1WɦsN ufM0yB(ZAA[*M:Y6%d0gB@Qā.(ʨq` ԲFxwDᡔJq'BGk $kǂ@K^/B!KkXhՌP,oØcJ2V']tKZn|=|M v'>poy9v)70wA~e\׼r/;Z?1b\*:4ҐzgLxO5M-#v0e|^*9^:O~Os{L lHF2Ч;L/%8>rlYj4j_}/w6=::k3Cv_vA˛ɯӋs͵^L6SEi*3]]~ϕ>{JqM+yYQMiSh>pŔFşoa+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,puP*soTӆ| LgcAMipŔd2pUv,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW \BWmA'pŸWLFBn C,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW:f ={8=;\1.aB/ȭ{c*dLd+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pe+ \YW,pUW )&2kr蕫ޤ 6|r7LRڠc'n> 750%h>ϖC>쳢ȍsMqV.IOUpiru] %zuk@]1.yEWBbJu#WJΠ50-z׺28uz?oNI51k OU\g&7ÊWxB8q,3&qɏ뫛C|7/p;Og%O.%O5:ݦOG ^r/?\.N~nnd?|˓O/4FWۯn}*]3#uo~0&Y?^6|_'W!ӻ~0OW,79Տ'qDtK玺YQy0r-v(~u'2\Bu8tdj1|XF+#}!"G?XK\y)0K AQGHΫ nTӑZw] -ab{m ZW LNg-c7]וP&giJtV@ד] m 3+W+EW(su%:cԕ͓"]10y=A jt%+7]GW. ty3ȸ] mȭJ(jʲ_Ll8@G;NCY6] .!DM`:].jѴж `8FB.;Wû|-b]Qy_=N S@%-tZ̦."CZOwQ)2)z~’,js4B.o)Dulp/]̫/SmJ} ޝ[!I(cѲ V]lUV&Xշ=C{r 샞ISЖ'M %,1S+/!^]1m&h]WL>榫Ct{=:@ÏWW=4?U mCM -Sk^]1NxJ0]QW/銁)f5܂Zt}hp] %uBp!)ҕ]1IוPZgpS ]1nj:L(+,љFΣ jt%AMgPho]WBYt5B]% t%Y@6CA`2&A8:=AMQZS&(u}KM!;?AN!9Y_u ٠Ll ޠe YiD﨨i70n586'@2eNnc" Sp3RpnI%8{E*YŜ󵫀sz:\}mFe(oKWBWtԢVNEwUGpEg$ t!5ܤuŴu]1ett5B]JԺ(Ӻ\$-ڔZוPb0;HW \R+5XZוP QW1=]Ōϱ_ҕEWB J(Fk`"Cpfqzv([EW9Su۠O<:-1` 1+"M3p=\TӪ[״Piz.?ϫž pom5q?K]*j/[j]J9[Jvh1)є'g\Zi4?Ǔ)q1hP 4 LJpޠ66?N.4ߎpϢSG۳=*` :8h@:tBSSt+ҕ?+%Ң+[וPB2]PW@諰~U] pB5ܜJh񠊲Yt&] 0E5b0-u[וPh MFWEWBo] %YgpJ.i ] nZt%ѵ+lmbճ*;BU+TԺ7i] mi^WBiq fK0$].96h=n-jS[ܠe YR|>rJKQuc*ĝ *swȷSe0A 9&],O)Rb:䀐M?/!^[JZRC*TiJe()#\}uO y{Vxʏ@8= E7Nu屢<m7o{qi揷o1JJw RFp9 jy.XS\ݳ.nSC*8f},~é{1\FZd~j{QZ;i5 "wßPS)RtJŮ]p_|M"ޓSQ["$E)dYƨMO%Ss£wvgvu:; L흻CdB#5*T_#x^?Xc9aC!U5)B5coꉋISp@MR!͠f@hZm`C+ѩѕঢEWBK͏X2eqt5B]"] pң .iѕЖu%LW#@HW 7-bZpu] %jg8+ƘJp+CmSE\0]PW)`HWLY^jJuC.Qjt%Q4ֺ ` /Eg|:IŴj[vU vՅѳ]+]]>.rl!4;Wle `{dwiify]B-:y)0K>ޖ39oMkկS݊]qynilaQpΖaR~s ,#쀪I.åro?l& _O'O'%u.6?߯tꌏ?jnnS-o{𴛚sM"+Co9Ef".C1/i^M;\b#,O'F>OUo?' 
o8Mc#8= "0;|`rIE60AfxѕB#oHWiϢO.LU<6 x%6JJJ] .]1mu] %u1gt%O/^\bu5F]tTPBJ-ڡQ6t,DdE\jtŸi6dJbb@t!85jJ(]QW)'NL9i} )u%DMcW }V+}օҕǮ2g(u%[G45ϳNC(̚\7^Z8)w w˛n7uŇՃ+]Sei.i͐"te6]u,҂8KIvvW3HhCi Ѻ9cl7P!f pKp(br4]ggx(6 8pA&@Q&5\lzj L|:Jhsl]W]j*Y9TW jt%C[WLF+t*ҕFW] -ue4]PWхIЫѕF5ڌJ(K1]PWIR8g5b -6xj<ʎ "]1pFWԴjgJlm"ytU0VFln6WK<<R sveg5#ZAbz5NuSo/lk 5T,MJ9Y)hO|PԑLj:r;U;rB:r#ȑk3%E$EZt%CqYGtY\:4$:ܡ~ueU% ]S^ 銿gh|-] W+غ2j,Hx=+ jt%9+$LW#U3(DjtŸJוІ[WBt5B]a"]1sIZܢuŴUUG t5B]ŘE4UJh 2ZjJ޵5q#ˉ9#~QVTeRUb̋‹b{ki`HiDR6!b˼`u7@h?sk2r2*us%F8nFs%8v XQ2>w'P?j*Wi}QK9q$-kɞp?r,:QhLҢn.JK 7rU0aPQϼ D ZhC2W@09\Lc՝7W@%\s'Hˋ`&?r~^sE-?t2cʏJ1s=̕cɥ\`̕!g̕Vv\*֒7+"`!+C+ W"2>W5 Es U+S+Cs~GseyQ)+f!+C \r \1MsQIX4W\q+ P222Ԋ]*e4W\Io''M~nB|Oo63l~F>Y>ZAr[n> |Og ϵYV"qAa@5tZ%UbYYa4.@rX/s^5{V8+AQfVɊmÈB e.GY^]%W9kE:i${j?w Al:ғ3D{V\=b%N.x<Ɖ~ג t\|*r6>t_~0MGK0+i_PO<$s~ \tofVbPqLqP³4OX.R͹N3Edk7};K''t€iJqalZoa8 &?2zx| ځ7~EQ|DyMi)`9I?iMCz=/tqcyMJu\P{1k݀b@IJx/ԁn0S@EsPF;<.I0I%Հ}7_dE4GICTtfF Gk61"wpn\6u#B^C ]}Al]i+%dmRzbRjnS |u۪Qڂ-tIرڱӏ~~BDG|i$V|on#>U!^%\ \$o%(X݉)B 4KXEӢJ:0&[ft^MKp쬏AG&a6h>+]׃F3;L< >x} R(;f&w?x74{da\S_߷k~_s.Rxq2$+&.0F+Vy #ٹzg[ rq~#u˷ЁY# :%>ò2/V8AhzuևwqbzڟwL 'wy91FE 'SF'X(I߳iQn w?8oܟaGF 1&$r3n_b/t4&nyqڔr0[E~ 7ʍ w{wŇ#l8ʦ@5AlvgI-z1lYq7:KF+y^(cWBt|Lnw1XJ_ŔjƊ߄T΍t w_9vؙcߛ9Ac=xtXR`vZ@BV 骋Co] 9KYa4nE0^DGwdMeKVJ˞Djsmod?~JA)e2Cr6.5mS 0! 
q[M˟a=mV?$k o5\\[|WUD5_iUY0_=nζ2~ٜ%_ė_K wRBU HRY !(+2I؛-{8`MW-V8F|ē1::[f"HeAqlPv>c3Ht;V?0ewNo1Ch([|`pX2Y3&Q{ fAV2CNXIݱtgu~_崋>cLcPW*Ú/bg /9;!|&JuzGb5v)ӱʄp?npYˏ)h]#^i df8C5_$^U&" ƝBXWbhXm kYj(Y+x]'9vbL"XrSHԿ gmZsp-+LQ 5 Mls~ oݤ޴36 :lwYi %y#^*"3|#K5Ĵ;||iw6JPanm5 ~{Vl[ɲ΁&%(SvQP$RIË)(\TBD 3ɴ$ĠӢ6n=wi3c$e3C\ܥ^(ΊT*+ SSZSq,ʬdyDo\EuvbjgjӼ vl|s߸Xqf4w6_Dh}ևwqbzڟwF|6ߙ&d7nVK%~b}B$ qE~u1%R FATS_.@ҭ[Ooٿ3T`<t[xP$Kzel+g2UD_R!OӇ[/ad=,E;cK5,vcwkKBX[ ;}|xc|޿2HOd[~ +]P\~X%m<ƺ{>cۆJC}Vz7\߳ ?f cیjG,t^ȪXnfh僲F(֛8V~qm{fyڎn5LŸn{p Jiσs,#noJuV6ʌ~DAJEsU'T#:778rNma4eV?fd()ر@ ~uTbx@>xdR[EX젃ϱY崋c>9 fbL igܱXlŌA.^"+% %^%|0!bdXĊm Rm-"O'PDjN=g?Fj/0Q:ڛ"ˊy.[B494.k%H *C%"WJ/k лOcO}.װsW #'BWu2bI+ll4ۓAJLJ0U3j㒈}h9_XXW/\TZjcގuu-8k淭ac._Ӟ~:噚)Mpe T, 'g:F#4 YFu)Q\53vx @G4"/V"΃a_.#Zܨd8I'xF)#ዙ񢊃;m DBιN.s%9ⴆ9?eGcasY6. s{= @^R=_@# G*mdtY {1_$D9Jyq= w1?H2AJ|:a0 %d %ïvR9g)znSQ\w2\"Zm*tI^S2gZ3Nkȼ<6IBR$5qpwMGB7JwNn{MN#0g mj&,q\vdTiskνX3]W9q9 xpI|X ;N3^Nrtd{q%1ÝNjyyX,";DZȦU$sb*־jWF/)}Vo$JP꺣/򴸗9e@'n>M${Oi[jkj2Nc"g;~&u=ZT[6]n?%'.h xmH;r;ueLo?[em, 3b1F}A. Gn޳rOj5{m_5-`\(YV{ă0qrlE: mu>6̫%.GKhsVbBH(+͖!al'b3{֢b@Y4rDG1x KХ G>\T9ZivmfRfƱ*2xG:.Bx[4G9mdjy-icYyZDǍnpЂR)yxHW<{`hÄb׹VO|<ȇw7oeF"夸G?&L5+/wUrㆱd2"TynO]cъL$Ȗ%rP&i%gxl),$;: 1-ypa sD4m9Vow? 
*3Re7(/P4 _޽91Z:2HN&8L,˦Y3B-9{"̈ݡVTSl2%JL߶35Y~ R4rP}m.[g]hhC&$3t#BY~85%uM5'vU֛$L*/oZv{)"ۣ[^Xd^?ŰjZ0``sclF*#gȜ@RP5Fa>g@a$#P9KfU<sI0bPhq(S|7d:K6-8]15\%ݒs,t,uөE}蓇꯿8rZ)sP!*+{FdA%ݦPS{ AZN"o'"]J6 [5_˗2(&W]Rx-+#s'X B45i9㨧E=)Hm M ixt\ 7\A/ 2w7rPJ^CI3ρa[w`gA&ʲӹqW8"AgBMc6LPv>NƏ(@"Q9(!0p9-y|CZe(@] wҋ ;e-\M&5@H]۠wHҢ<(bK%6/(Ht< b@RFeڈL)7A^Vrne7_+e edxf`}SpzLh^3 V<ߏ]A hVpXV$j4%DԩMRA׷rYY1`,8( mYƤT4y/9?#@ t:bek|B= ϘJ.՜HDPsb(H-rAFv& 2jymmplCոʿ̌q50Φ0I`8JQ6{6 .crR7(kay'F$Nw;_r6+m H)_EzIZ;nZĚuD'_Qk{C" 90PkX2nTY),ye>>HƖwݝ Z ~U(-x ]{ʚyQdџGWƴjzUNg2cbف-B"p\ ep0[?oS=88EksVhy>ʹ\A&s뜪w(DX=.cP1l0 ) :ݙABk}\Ù;RCU8ЋiR8JIE9BlHc/Tې- wmFB;"HN&(TD^<9ic\Y+0Ҫ"#oP@Tِ7:4`6؜c>L1,/%y Drh̖Kgruh"kyhPoŌr!_vls;1~OHdlf>@`DH֙&qi(&xH"s%b(SԔd$ &{@ᚥ,FNs3DX1ƀSrr]V^>Y1d0:-Zx础@I9*wFu^|}7pR\if3xݦVYWSPF85vGSl}L0PV6ml+P "=F"h0aьXBO1p9uՕ2ZD~LKDΑD~IwKp#ף-e~Hal}B2&tw)o@+6 Ȣ)OBXIiL,9g(cO1J~OW2נ(a >$D0$UYgUSK ͙H6:RȘ~Y W@h~ؙJt ZX?Q,~ۚF*|)~Vؙx@_ (ׇAS{WC"ˡaDc+\zA-h&%4m*H:i:x%H B`$ߚG9wf9#$#[ Y:3Ì/40sB&)3QTDn.6\A9&{ {Mm^TB5Õ0c+\s0Wzb9X~O@kn4ř5^۬zp"q$  qs:Q֧] cF>N| $ӱ `PRJ67[!]r[ A2No#H"Hs?Pj ǂB<:-3n䞗JU ob`r2HiOBe3aP%o4&3Ɩ4 ?ȁ ;Ac 'HMJNy::ĖPB\h&$(#`uaZv1t0y~SԤpO(u|Pj ɜ .oL!ߝ~1qcp:'s/q,wڀZyfUES&G0g|_8CkmV`@a`QI( {qÇ,{cPHldגCr83g(bn b~yZ#qpt_ ܲ~v 0x%.jd3(8FEa ̣=4^]ͥ, "i~T]6c X( a!s]`C㤑 5l퍜}5D%8 Zy>q;ZڨBV8rܜ+ /y.j JZEON<bY4` HnZR:J ū0 +BѮu'tlTcRskecr%mt}8!ZT= 0ykAN~jR I84yjz[crp7NG /?,f$:h4)Iأ9 '`f+liD{xJmi?|Ue<E;/} vI{>G.{{OWӉ Lv{]<&`.6>]^`c"Q{`G?ͻY^AA4a⨀cv5VwN<5XH*l/% 6-r8| ɇi,H.|Rl2R:cPHǂ"5EX?e^]O0BJ!QF#96g_!q*9EĆoRo0)0 ,S}++8PԨw6&Ѽ=u>\IH/ZAE*6DyĹa O81R+*\0R(g7qE3S0hn=Ch{/UΨsXY 2BDĕH[ a K%狛w띓&*# TؕCv5nZ94|q(P&KQv|Z{^ dJqVMj^M{.8qQ}{".-i2蜤<_muv`h1GTO.a~\^\܋{qr/.ttb4؉D`: Fg0ᙊVkC2Vh}RΔ)v.'Eӻj>|t LyA, |ҐSiـcGeaNɿsP_G`? $U4fac:#P ̓Qʹm Jgzno֌v`\#{9Z k\GA-ɣ/fa$l㨜ލcu naZ|.h(:. 
ײATpe FJ;cUEJH )A XʀCFMLvE)I RNOE=B%33)M 5M|5alt9Yr]Ό5&]Cc:"ay>\o f*aCӞdžtk$v&Z6 0Nѓ;TX^\GGkwJ+t3S0,=UJ.UP*.,)VFVHW(La/Y`՘R SAiSB@iSq% V}g~J7ޕ>EDUz+ wceczh9 f4] 2-ҘL*z_`΂C{/J Vڭ %*ܛ_s6êFNOvXcNJ-+T3vUl55kRqESD1ݎūo`켟/*(ìW{]T; Ԙ9I»|=wƋٷޙOHS+y%`FhW/^+1pz̸ܳ?QpWi)WWϧ_4,}0{ѐs8BEEK3[;9QG?C:~p3y9ՀkW 1wNoC] }(>Rn Px/yUZמUN/#̡stv쇳qA2o~oIZB 0e8:&Jg;^(=H*4MvT 7U7a+5plͽy8I<%*b 60A HSPƉ"Δ@X@G i: e8 q2xE ]1 E<$O QMN7[s{Kb`&l d J 6Xz6B -־ަfm`T:2)i¬)JtI:DŽ1hA7 YdE gx"t6ûcWDiLe@BrKAb1cžYclAcjcA׳Ϗg`(ER"wy?xC:+cs./s;)og4uݥj} ÷|j7ۻhGi/?JV1H?P>\U~dǃ ~_̥ N"%FToiTڲ1EwO`bF=>k-drЄ*dDǥfN*yh%6Э%8yOIg,E2-ok@~|k)Iǽӗ-pƗgCLìY4Kq/xh2]AOuZYlևEkj?0PB頹+p1DzC #W!PpM}3'K[߯lniN᨜-*Co\ᎃG8"ai0y'g`m R!DQhp>VQpީ&H뵡vN'v TҦbmiD_H7+-22!IdH%K%uu(ƀ~ź cRJ;But94zIʭl @Ns0 ZɭC`ƨ e41'5)lcbh$ ٕJ}zQ4HӊΈsg{Dfx0_o7R33ﶟٚC+ln9Ťj` 1Gxp4xx,DLe%aym3 ;pzvNd^n۴vJ`*F|ST7`-^-@Pi\ O U*"C 6B3Ttv7,WBj)>UsH^d0w͸'hNK] F`(g#I': g1 hn$Fp:}\gk ՌmG͵87.OV~wW vκr2x6}Y:)H{*~Z,5)CrҲ4=eI2}_h>-q9w68"۳E^2CcAhJ)u9WfьwhԲ]Fl'nɴf> (B1Gքt-6]1j%/.IsQNnnOI3Jf+wMx#J&j9IJ ǂk \8DpX,-1  %8HgB)@7zl!Mk#CAư'sc :x%iE;&,oxŵj֬ȡ!ь=GyqX#:s-{Vg,Ĺyet۝=zpe-2z(rQtZT Gp (ך3xBh_(Um嬨  LJoPlƥ>/kQ!M1aGD~ѓ.4ɊbǼڿsv=pb};QN;] htU2&kΠ?~Y(OFԄfDkƿv9YuY̽Ⱥ~i^]kΓ| WC>[Is|;>v:M3n0~2Y3' ufqRmEq&0GU kՠۜoIo!KxWXC$ѶMh-(}f6t3?ed_q Nzrv˧)(Ti?@mI4-}W/ڡZxi247pq1?yw߬ GP]X=}zN|b+qi 9b 2QztNPx.0llXN CvIʩXo!HJe$O)^bj|$򶴝:/hȉ>kWfꨧ`ZbC(f|{9lI4{\꘍ 5/Y$`0FRw el42>%!IAwY{_>֖޵\"sm ;O9$mٺYRl'E{JY59M1_Y$?ΚuoHFCϸBZy:;fq53ɴpՍ d4< 6YR.XA]Gå;);/j>mjw*~EtOZ5wsMh#X.:I]/Iqԭ˱\,8S` wAܗYtqs4<(e2n&pC(FWVef<4\)e&c+\??ԸaдĴʗOX)J$ r>*ƩHdIbbh8snD6v6؄Τ1iv*XOy}LMQLф@ᩀ`T͌p^Fӧ|΄p yZ`Cl_9&]vP>)1ZF"pRxfUWf813h.4>F Y<ߑ|[|4RÎҭpZhB==YsqAZavp*Þ<*$ڭh!ZLʮlt8ȫ!8kl1e,D Ni41C/mmMN|+ʯBGPf]nr.$29IC:>=oIl4 MoTiJgi1*ӏ4wXk$lG3FY,qMPj&24& z>1r9fԸk@O?y#jhZj0pࡢ=hHӒГxcbZ5{fLpzѓbT ҈YwEYRchN gzBG12ެύ"U,I$7ϓ>Q>.; ސS q,Hfi ;3oCGw:=/ _-\Ńl{%o$NI;N27Th,Wh2A&gIbHjLJF. 
%Pj?!'[+zC6`sk#I[,6g$ԖINpFeNFɉ53s97O7jK+ '4\cl^rgNR!ߩ/!'x<|?f:vo둴Icދ;OX.Umr즅:k2O{7:㱓RUc^}EE}$^<|SWi|8%Z*.0|y=BU$(zԌq;?-8}䎝f1z>ϲom OG'Lg;=w~%Ҝhhs&g8SO#qG}d֠sv q(?>L넇9FkO$;/Aq`bg75h|)s[{Lbv~v~,,<i)xѺ&ÿrXQfvt̸^çzQ g]*]Wꨕ A`p?I*D2Y'5D^27؝:HЃsPKq jIyI.78 {?8 Я&4Baܯk^` kWL)wiӒEV4sl 8\ .?{8_iTׯKviSh j@daNok jQuTUPJB3ᅋb^*z@o)U, iU# $)BJ ;亓\ߝ$ם亓\k\J)PYWlffAriVWT Hei\kjp> ɥ`k)y~\WW \n ݲ)tQv%Jw<$םNr<NR.UJ/ĕr E\.=PPljw)SZI6npEYdk]u'Dw'~:[]wkKU!TAOs\\ZfuURU"[_yH_eHjUH%_Zjsߦ!;)w<ɤw<<,@x48,fnt^*~@HP[/)rj/+.e%1w_҇wTJB,|9T \R!֚J$RJKψQo֖_,59nӒ+"a톾jwP?.0n7]B5)>7g+8̖6jF[%m}gdm7J $wP*׶թo Ztʺ[ }s-u6oy^]oQ!o6 8_tЊ*j/I߸,vF'6R8Yu[Dٽ U|j{{;QoZߡrN=J0",B*N%Ý yk{ |e3=+኱VDdSWL'O&O&'I2r2I:֧Y0|PZ {<\$]JNӐ8`.KRPASJD 2JU <m\4ǡfѺDChcR6SXdQaH^O07>1+/W")]ʎ>⹕}q2!XWeWsfى>F p(Qa{UZ "fbpS!>'_l~ӡE^SMXO7F͠ڸ]LƑ(9{ёL,2N-rf#kɄ Z}BWF >cAݢ;>]TMEtp@J;]1ʲ>Tgf:Je%IJe[jD+2`Lb!A-`r`$>l KW0EYIS%񕍔Aȉdb d '3t9*ȭcc:PAك4zi#Y( X$t\HN;A&]F2{ms-c oF ?ƽG*EV*YL4 8q 2K2pN4Z>+3b,JW|JFPܦV> [R>W2p$zSfc}cx PJ<'`K:7I#B<hk9‹f!Sf>w)G~fЀM ڨ[rG;*Xʅh$BP**BZhJCJLdB%aD띧)@2НtABXe=^=a왆 jPjMLADsJW0[+GLۂ9!y)Y(2n~c 4Ff E,Vf\ˋk% l&PwfuDE$ H^-AE-k%(,$ƶ e8Ze.C2:!M)f5nK) J{ItZcjd`|?j&PH=NЂ=2k !?dTR4tA5n˞)CPA}Sk+e7=iwq*;'40rhKH;'¦I 4%24eYcJ\֘-{&?h&Dg#+A 5}''~zࢩڢpW2:uw!ݛV]E.h`(=hZ7#`̖G~0(#dp6;YAWɵu32N0c-{y5\GUg֦ZJ</v`pV)6ZjyPL%3}9l#3^w\}v`ozrk- ;>>\!0`>g:~Tt6BO ͳHʼn]Geý~uCV}1li\B]"Me479lws*LyrWO(HΉ%};Gb'; EBm O43TwIIgB&)8Q뫶ZEljdZul= 3+8q sB6kMK@^kKeO} }Zi vs[{0J˓PNAb3AByN)}:@<\yj%sdzu{HB#8|$Q#5:S(<:7_%ի6U'B8s?*=[;QX eCy CЩUXAiWX[ "MTIabfq-aAFqpz=ER;j|v=XEa\sVCk@jPvc$̇~!Ԅ,ZGqx(!:VYN}]Q!9,D^C{SFڋwZh \߇wh#kـ\"Zt3,Hj0Jj^$eo)3)Az|1ɞWk6| 3$hcD>/Y3M2R~HŠ U] R%Eq,pBLl6؀QU3Vxc+U*8Gnp^OД%3?pB}R7r"~# !)?vY-1Y`62LТ”1i;*3$d&CSbO.h5B+H}՝cD3n 6hdk5nG$$XS\qxC9ύgf vc7i=#TII9j(Zb]}qkW@}D/T3_|Fp7[RCzO,ƔWf|"H=9@e}yarەkyD+|$ASbRT&T+R-bRpH(z"l"L+茣Ջ IQv\%!ܖ()fԞav{n髂gSE08 7t})((U,^WvዾlQbSF$ZK=Zt7z֠7=*a+n݃5v:Bpp~p'9]\[5nari;E# ~!GKz;q)i+ZlNYZݝä8bRB)2V:m8 ʜ^,s2+pu/^ u[+`1@ɖloCۜ& GpPRPE{]] aHGG!M%n+>SzXԋ4@Z`*3ȟ% x҄krkYnkUb)PZS)٣S-N0Z$ِЂj)>fW7lNQԭn"^+ 
qtdL((BP;wmuR芁|#xۯ/,ŹOVԖUBgbIۻ1ژ.]b_q@.0.BiZ`*HهyрPh~ -*&Ѹ3o ۵2Ȣ:HizŀbH*4뀭rhbd!mD'] :KB%k}dD,Dtgb#59$(WC:cxz;_\2Fnŏ6]i!_ PY/5 +{t_&h 3:IhD*9WS4^e'ߛ) I/WnmLg.nM3C8_90aMtS0tRDWYKAbW9.m%Fa2lj8FhߝvD`͛xO; k?!Lɵsw"$fۢ05()!mc GW|)џhm>uǷt4&4؈݅sޚR?[Q}NEZero!U ٜ#wQzxn;U @2A^afl7K^{Url|~6eη-Q0ػ[H!8QnԈwmbTAՄB!*w2ڥ(`F|v|`_}q&B$mtqs:?d童C^TS 6)oqXNk nI7Br>ِӃ+bWg <9n0c% 9Y ({ə2ɝ|e. O<\y2Jg:svF,=9kaO3ݮ憒$sA!<]"Me479QؓȜdh*je&5$ʊ\REs UsYJrCp:{tl(; T+V2bL Jv7jM: eN_3E5%Fu0DeWs &4/)a;9kA'( *5Y}RH  icJTG}鵱Ϥc`*hp,4;{`t'xأ O QUH.yCY TQ8=%nl[ 4xvB':Y)> ܫ1uF1 >GMɣ|3f_4>] Q'by|(ǓTv)GR  aܠ[MH!EJhHCZq&=QеNOI!'TWBH3%#w!dPr搀~00H1Jʘ=u0(GP&;I9 T=[7Y ]݃ JhxABE¨(iQvQASWK{oTts:IEW 9BǾ]r`0u}.򫋥ipv2][LD8Z\Ħv|gV]Yi7&λM>`=zkΧ/fgEvvXޝSwZ}ٻףE>m~]x}dw!hɭO[}$sX<`^2)ߟPb˖9<%KaLJB (4su޼iǙx6- D;VMO7%$wT拉8[:tS=Haz)LkxIC? bw#wDŽ(k1W.]hG@'&R_q.'w䡺Kdv9's-X-2G>3pB{y.@2Ǻ.%5肺ȴEx /F4Aqn%W1RhV%a ¼1yOy=d" C/{oi%rdKx"Py2;ŧZ|vZN)HT˒ȿߍ~Qn =F@<4pPbhq6^(YP/0Esz\{QIA[C9ncWz9rfky=Np_h߿UiU2dXdNH$ENd! G 1),UsRnJMIq))7]O gR] K؄T)!3cڜd b0%F<5zmJgsmX޼W|wXlN}fx}+7=ETj܎6/fxzY$\ۇϺ#6akc۷%j{7Lvido?~Xǭg|{v獻^w[~twGm?Wo_hro%(l#S^Vᜆ!I _1k9^sKӎ22x 2<= 2o}}kwՙᱰ}UpѳA{Z.U011Ȑ{s@j =8cȍ'+W?&w9H$/?}L66D,V^Jlp뫿|zu{yώf?ƿ7hd #,Ke|LJђ̙=KqSq˟Hm6x:EYA8Y4CWR]3`ֻ7ϐ *ϐO^u!kbΎG-BH63)$zIfT阡꽗Pj끲 J%-!(pN4[Y^+D0ZWЖg4F!d"Kpa#ڣ!ym.H{ Atᤝ5(ȸl S- p6oរ>;5 ',rۑ5sΌ `ciWѮ5&n\{%zkKj{M b;m|+e'&x,(^k!Jh$.iI. HDdFdڸ`ќl԰j2W~ק=7|eIF3!{oG|57|6 87|q<ҷl-SՖvcށFB> r0k=sLj92#I䌒a54^~ Nf$ICڄ|K0\b4YeedF&tN>$C@-XcmDSL+KQ^* j3i+96Yu4)p/p@ ɔTmKMXO]NXBmA ,Y@s!jOVY$ UozY+LQ^ȹWp Z Uq< S2覲J1\L1:7.c e|0|`N@`owPA%dO̶wO^氦&~}$'߼IdFA|ӹxUbV{>~/7 UѺϷ\dYH|WH# `"׃:) +3k/KV$В֮eŘ_N.@1dP̖W{K/ \uz!ݮ:vuŠ-^Qr7rcg]Č0Jsfu"f6..~V56qIQ쉈A_"F&Z~fDj^*=dzTttLnt[I5]Ԟ`iZm[-`a׼ȆAHafzM"ġ 4`޲ a#Ԗ_L<~ y Cp#NU.4{O:3~i߻4y|=|qӄTEvrPs@D~aƻ4y Sj:Fz,w$7!2 D$Y^ "z42$ºPzjðlX-pz@qeKz)eܳj"B܂xV#l:M;Qj0m`?rPrM>OM]}\e}ALp]u1y!!8JurIvګњ 6 +ܺT( >7A0_nh5}Nl$*,'b&} ?5BpBup D{4} |vI =~"I"jO[6?_&MH;2Sn4jj͕fY,nPjGkIgwiM5݉(~Q8f9uH :B}N=8gȻ:.w׼߯zy7B. mnBpPGz0pZ KzSK-1Ĭ\^T(j EǫjnF/~RݽaۻKu9~/yw iE ID2*jS%i 'g:sL$VF Ryѷ(n. 
-o(U0V t^qѩO F6b 9 ˢ-n5Hſb]yY C`9k|)/s< Ft 2w$^w6%Z;HYq)5ƚV-` %dZ)l݃pr^޼Isfy9VORE) ZEqݹ&254f=ͻ9$̚UkGk"QSJ22f&Kݚ@&ϱw.~\K%a1g`V'Bf$/°&=2Q3Gai`0iiX[piҷt5ՄEJdQ GN>:m?Xj2;ALzƯ,U3ucRuy ݌$Pk&&uq)J"O2_HCeʥ햓E)Go%׸u*7A>rG͹B; /2{şZ7z7F\6 va53 y%&|]6_3?$X5%zUXᇱX6z?BvܿcFةgN !C1TPlF/N˘*J|kQQw"st+T ooռсnx#Vc.AfYm]%I>C!RbBwoHMsU7 yFȹ3*y"BI}B 8)cG ߎ%˺.!E#jcYHew &a]a- 5 I,,HQZo0 nIX5I]zg{_}H=ќ(:ыKxJ\]jnF/.p/c*Jk?ZFZ !o[Y e}ìL/V N_KkwY1i lV x/'#Vtɠ&z}41 1OOҺ7!%Y 7Kyw7g,3mBN}r ^eilz$m!t 6I\UQ"%yI{ƓD:ͻkW|%f`A$pZ% Jٔ]>A0IIw״9c;Dzni=G:4];)TLQI2; (A3(A@H,W0es||y&r;!iٻ6r,Wi`gK曼fFY,v O ,6t2KdXbUIn8]b]C^^J5>Nç’~#^bUAIV<ޑayoYMOFgq>-FYΞg`z^ b wynћ1}? wИNGW۪#vT`pCThu?&WCvh]nHZCc4^Mt5 /|MA-¸XJ^h\!@K(_?5Zs2*ޜ3fDi'#J2%V8AAJbcC/RQKPl޷UbHDZ )᪖Rx4|OE= =ɇ}Y6  -6z̠2]ݳ٦_˖Q}jxaj' _&' ?Hf-&=,t.F[BRԷ cE0}äOr^aya"M7hP_`B7=3p89loǍe܈_ƍe܈_7⮈uiKDKqgNSYDNcothI\afp1U]Gs/0Ă=LbLW-?ͪwfnu6/RJ=Ec yLtd-᭬=w_| OǷҴu,-{WB~d 6~R/:>O`>SXI{=jLG)%+Q7ǧ)XO?wߵ̆ jHz\F`D V gJ`Y0[`8Q 9b,G[iPoSƻ@v/rƵZrdޠ3'쥱(h'/9`!$񖔖xcLRֹ:g1-J.%tˇS%Eg32DR01o)s rٷ?MXVd߲`Pn_N;EŦfuŨC΂`=vj ]d>}_U_֤|䪞 M"*'c_ՊJ P+ (X%`nTR0{P&iews!D9\0-M\S0A䤣`$ڗ󠷭ӌi)q:q! YA{zBX3VB8Jw؟Ń;j׳g_fd>v9'j~NfR|u]`H##pZL8/侌>YEڔ^Zg 4^Iz5!bɰ0PkV\r_>>{u|"E O-̹+w Rh.ի|!ѐ jIjsbX"-KڇSkzJKV(JHM"@ DoMjA +l WGND5<3 ^p]2XĹu/Wn;5ܗV@0(h]@=/0;g80)J&"L #EIuFyg :dRp5d 燮üɀn )㼷6]h9 ;8B%{/>p&N3PIo_%&*_$G|͟bՀQ%M랱.,w9z*3RyyEeeqpIÇy~>u}x[cAHx0QӪw𻿏n]x3(.}a2 &B~|R-m `#Ӄ۷LB<쯋ӕQ9\* Pja谐HJz6[a<Mn5WpПL2Ҝ 9 R|vMNX,M&3: E-h镧%HJK_(ԣ[6T( i-VNPaySJ7Arkd%*CåQ+0W$NA 9t`'X IhZB[]Bӧa>JĝfԻ:.wٵ]714ib1{;g WV&נe0:^ު+!qE0ׂq\]nr!'7}źdIX}f㟋O>|D)a[Th:).8ӿ9v_ ~z D) +|8:-qƸkKBK!c?T|$ŚyYf4ZUCX*Fb׎PR@^UE߿u\*t( mCUE"Lٟv7EVh(Q(q1TQm|9;⤼lKcS!f\}=,:-q( ڀGW=)&]$]MgEuL#c$W7TpgKؚt4c-@p.#zZQB2U TiʅC.NPӆ&EOA^KqVn88mwc_hnH ֭U{ʚ=$rLhYF$'8͆4Wӎ!Eul͉!ĮMenuD"eq@^yYL*r{`I41+$9Ғ9^z"9SD2E|Sѵ8 ịn>E6C;XրuܔD!JbFZnؖr83ĕfcZfkr!69)='MWn|"BI5eAB7`f? 
9IJАJ2.P-<4' EYr TJЊcpNqAb` Oc˒1j:ŜØnB J;c^GI>WB%cz0TCWVq<UId\n4 roPTWY`01&a`1d3Y˙bWq/@Ըdl { T9GXmf7&tM_Ĉ7q~yu,)E CFD ċ`J!03ڜdCc!"&LOy WR ) +.bcvM9D1v`1!Uו$tؑ4rZZ)D2f4`)RP/ꅕ ^ʴ0 8%_SקP+;KݓccS\s X =G#N!@a%Xt|5+yE.="yr_R~@ތAz*wS)[E8##Bz| ZBl_@ū$JZ^UMaH?3z RaHbd9cFc1ť%ј. d6AAtQ!7;J|ruTʹ G1 hCR*&YnNatA}dH#cX)*"@_N l{(VfH`w8BhmH o91qתlB -kUN.'6[459oJUp$Pӏk_jakQا?{Fr X{N J\KBR阮iHÞm&@[﫪.^~y$ـ*@R>jn$NS N#䔎0 Vh^(baEFW)ĭR"#^[tO7%#6Sc\? Jz[>Yfm 2g2XY .{ȐBQcܠ[rx[ݑ`*Ў:g6IDcura *8!,#ټ04$#:2sm_TnR9/_ԥ[7?,Вݖoqx,{Eeo?5v\J8n9l,)-t^aAaISOdz?znS遍s3S[F})Z4kl$ʌ1Yc\Ȍq*E|Vlg4R6xpy*@ 9Qx%S]2JS98刷Q*q!k9Au7U %](&owq_F?L}E`,] Q+|c=@+t|vg<\7Y@OaQEWC7Ig#S0+[&-`Y÷\9Df-9XAI0UG T.eh w5ǂs9spvE9+ݻ $^H22wf <8[Rʿ;-(%Jִ5-5F G跞>DⲧGk!ƽ >fΔPǼzQOJzfAXb-5d0i_5+D@.h|DpbdRP⢞P ( m)ȱ=FبHHcCB(1~KxP"2ɬ7 )IޣuI19-2̻oWc ^*7`,??X\=DVe4-iaE~[Qjwh0 fWþ]xkad"tO d,ٺ*Kʒj S\! ,ߎQ-Wccl`QZ` |w>Ar C >̮e[q\7f_웃M|f0?\5Hw_ܨG/";DRA*%F7eLT^ʫ2QyUMTVM :l&2 wܖW;1mh DrR,#N6eXE=ҏe̢^Yԫ2zU͢VC8gXSaƜ7?b Esm >ҏ܏T 0RܔڥF*l^IAL !\aqJ McZ.7ʘ;.ŭb Fɑjԟ߾qO*/O4zD6*!]p ZZ ;SO bd oARDY#6_"$; *h"aZ$6Kb4!TFh*Qmx[URSL07h B15ĖhyVNX,i5Ll@VkGi5/ݏ,  5:)U]JBof㙥H*/,2"j  N<y!=B+ GBʲ qVP7b){,q >;QEŘakGƒ@Ѭ 7m"q]ZNX 0<4@{ao,ܤֲ{KX)y@CDt8jX "L5}ʘ n<T B!{SSYrd3)#wkQ85H%/`0s샆Pf`e(MKAIQ k$OqRA~W2ӈ390 V{׋i&q 0N;3ƅWo?Eث7xG7Not-RFDXlBRpnHұr9fxownӥvmӎK_x퀩]]1t)\oB?>zpMYgz鴻_59G=t'z8UjSہFSQͧ~,[R!w?(O;'=?Pd{6ͣ)L+ZPp)d(\Pp)RvNO%ΟܥnF;¼;F&3:8g0LɌΟn1U=<ϙiŏ FNet>0*wnӨɌΟ<<ec98wy#aNwZgߖbrp UH(:"$'[MӴnIpR #i>*$ycEkp4%e.:%X;sݲK.V&ִZ 1:Z5݈&-Ӯy(mWO|K_=q :2_dB6UIZ5&D~-4]pt[(V|"e Q(L5ī&hdr肷jdјΐ$$Y43`S<&@>)V,ve7d:RLs^ (SJhG=y$h0 Pq#PN\31_ 9b [3rڙ~(E?r5vD襐L.KVZҨYjQ t{Is]X"RGOae%3e2 ̑p)̖ExaN`p) MȃLvh,nٍr@eqpKtA4)C:70}}aOWxS doWK YYf^yo0L>CYw%5N~%L?Ŗo'4W^gle꼜Zx\۶k}֪f9 aРf݆f=˷Xw]84zeZR`YCvϟuIuuؒLkv.L;gq0gŨmF3'dnzvIeR}Ieۨ4]DhE-:>|l;'s0`QBz6/e2r\u}G)9BLɯ qv807#NO<-~: NZZ(jki/z4S*8 >*&R)t* 2߂:j|NjZGheEW—~ 3LQNXNNXI JR:)h%aQBS|::Mu+pqu?ǻ_>\0w) ar 2 kVr6f*Ps+K!QպNQKG[o@xhrg&*ςsƅ,a0;vm3Όܑ3_x[Bq/+u i @Y<߰f@؃\UXyz 
wԚ7%0;3d2rGeE(6V)q&QGh]:'Qdʵ΅xIҒ3+RDdRtT6UYpۊYQuL/J*&(W9g5't*B*WCFKE6r(NI;URsQ6Pn7Psg#"]5,V$w/$Ϛ܏Tm`왰$|iHQsMtj7.qfle \ʂ5m$$d.e)o63aЙnī{ w356L{  aRnoWΏ0q S'HCT%ybJ˗.+waU[<ݳJn4b5 Rqy;bdd'O ƺ;j֏soT C[qDAO>魧OF%өC&'И@kH?Һ-W.m^}]<{0 8{y*`!N1f{/W/?LJ͵g!Nnsoۏ3\ P6㕓LEe\'&3 6 ap?~RŃKL3y}WO*,j_-..U%WWW0 3r@KuWN-LٻFn$õC)3nC>- 6cؒ#iIߏ-rV7nfK_bUq cqMgv& ?m!׷\r}[d8r92 T1a#0+#Ary,& ~-0r䶾۱*ل1 ۄG(fŖ:iǗx*E;mF)y|W$ '40@ӱ^pfmSy ?e$W5Bx `\N@ŽXxQ?ͽVW?L4Bq< cӨC(%h,)wTӸ0.= izDrx:16:CmnTL:#d(I}}pSEzUJj*OJ|D`nŸ{iLX=/X*"fh Rra)-;]2(H .POFҝ+Ne?!~D);5}+Q\|3lNɆ#{R0TDm^.w`xGeL;uB%HԤZ.Y}%O09̍[+vOځgRv?8PޖGbz0X&pֲSfTp$Z6, njTo]`ٖ^L,Pg'x~)nɀK- A=bN[‘-ewi,eH3ͼ6u6<IAMp 5wلִaRAp+$sg]#8AQq;1'!.Plx2c<ʸF)}:s8)ǩٜ zŨw^ɭ}[/PIq}: Yb[(%1{5!vwo¹l+bX?Vy{ xeQcT 0{]:cz(7zfVVdĆ?1&>T(H-V,_(]p5ޥΎgtAsqQX,M*^ǚ%+M3l^=T 7~o-C1Wp)xA:~KȺɲG1/qLAHyD]8Vk,f3x 3$"-^BA!!reYTleV6ʿ!Hhҹw " & K•3<ȋ,7d#s"[̧mC E88AdSшYgwq1n]{ q 9n[wPufVaќLi2}nhd86.oվ%c |&s@^"{'t#5{}p*~T*"؉* N.JE;VRzA\Oѥ-@THjލmc [,{]lH\XRHW ijzލ8;xO_C4W  10:yBRӴA)׌JEʊE->/$*7Nd}|F: -_gi]/6W>B 刿m.5h=khz|uvei!}]dנZIro_ #JӒD$(-{q{H7R/^@`a`E(X9m$ =}.)hӊV ԶocXhb3 pJ)qfNLFߍ}hVISXqzW'0/v) 2@n} v<[BV&t.`2O|0 (ÁƎ[1kbXvr¦)vgtu/?Mm>uwiްxO "ĚQJ֌k("T) }uGQG˶;zPZSIy5iV/6ݎ=;e_j(W!]ȶ]KP\n7VG^&QA^q'j#ږEiJQbs@={v?(GqR<aRKX(|%9towH3|aLY9pB_&y&OF+5*pE/,#.O֩/ 8 L_PR}o8N_ۦWX5]Lw/+ uwo_P~Q\q}qϗ?àMdHPiZce8$57~0PLLPZ>!8_HJ;Ag7ߍss-0x3s7pOɃ}, qMgf Pg[R V2owqBp=ƙ8WaGٜ8ɕ@F fքSirH>xOsA >N&O~8A0DfzԠnfzm:-(%;plΕ< ÏM\Rm4ʳ|0c'8WҠ (,Y_0~rKGS4*_M0y '+<1'E]#>\ZT1hg":˥K\YIt)Ad@%j" 4EAŜ$rDu5]:g(w<Pzk9XANO>G @ Ϸ,љöH?mw܇2$9+ H~Qd/K:z7=$%_bg2 [tW]_=˱KާaTڛO73|'kh4BUܢSG8DT<dRQuRI8BY0gi5T\ld-ZaZ  kST辆N{+Ig>ƠA- ^m%ӖM{iȇMG? ߻F}O =0iwm%Oz^Ga2Q:L3 rwLE[V!9[~H/gM>rՍ֨mڛ9s/waZRdn~Xȅh-*]Cݘ݄f [] rLnUfIO۳[nMXȅMLW!QBVGZn*Z'R!opƃ=-TGW֢( Ֆɵ.6}EuG|54N;B >0?\utMp8ҵ9gpP(G8l ) v$t#)Tf&xDRB#Lcnp9o56J1J8I"^yaI^:Y0oK02\Ȩ IuZ͗JFkXsnֲy*&&"0p;66rꈩّqDqY?w;2Ds OTz1#{ V7'B5T\|P 1579˴ŹL6'֮ lK"7۸s^m|EEEEs6u#sAC@?XrV) HT3ƸH8 vZd?Iσp5:w7ЃfGA@l7E7`,c6.} o]IK,xYn NDĢ0i֐fš˕QAH89miDke<& S%e7? 
\ØrJRY#1B1n1QH- ^j(CƬY>Diaxp j* s^mqRB< & [J].`Aapc0Q Tr9-zĥZT2}U =(?{"E+\RL|D4BA kMtiCnW\ \N邬K 4jK7 K"cHq $5A(buƚ1 ݤPA[\Q"܂$nvP#aXU;]ptTj5U+ _@atZp X^hA| (4Z)XMhIaxz#$Az/=Ȼ!TʄMΨ1jn*{BaB3Kxzixp~Ylt-..ni뇿|: J?/=08~WXt@o|}p(ޏ}x2]ɧْow dťό8}'JYT,?| s c֟o7ڵ%>d6]/wwaܐΫ{|-sUK]o~D]1+^q1K\-YiMybs<%0dpIߢ٩T1[AuP[n! h[࿄n?GLtS{s庈j=挬w /zzf8 NfKʹ}Ɠbzٯ@";L]8o_=0 BMMZBR` /la"E!gҭӜ6^%Wln2/Qՠ^_ Y*Hb}<DCWiGwzW8:\z K;K(UL0 J؛ІP]w)20D-o&[fj#.$9u= AZr`0pJP&]wT`ޕ:>3ĩ-׎y hA"JKQs;̎EJy? &]{zƏS?T=i{43t0u-zTiܳڙ;Y@ULINCHSziԕՏ^ިX'0:-4qЛ'p7Lǿ t4g4}|[>0 ޒ%DxBE?=1xIf=Yլ`n m[o&u[[[ÖKgOsay3nQ0aJa-}knCn mV"jtv~InZW5Gyqt%RcIYs~/u~AY å(? j K, |'a3 P]XEPQyBt{B/ðB,waSx 1~^w/s1_PA cȩ_kOk VS*O:ZZ;ՔGJ@7Wxj’wxU.r69SI}|:ժjh1Z/!Up08Q@w*3W $X#Yd1G8:㝈5& ,>P͸G,x Q3xDƃXA9'~# p谔%sVQY,8 R 2+r.7Q"|ZlD |GR6y˜1 NZ"'K 2%`,(21X ЇPGjEη*‰h\v m." \op)=mp$ 0-4ZnЭ+y8\j]奉4O¨:j0bM}]5BJ.!I%Jdk1z罊ajåރR9EA ΉX. J lxe F Qt[zxAJ`7Hytv\m%%7_b7MJppfk9oؑ/if)( bOr6s5ĬMֹ{9[F=եEleƮP"Y1V|]̪q]>]jp k^84rRem*StAkbsDʽ8 nq^Hߧ F|W?.KBl6/.[p_\O<{(׺Ä. 8`B[ X71J[9/wE)[B0qlYȼ@z&z't8),4iC-V$]N3c}™B1 Mmo?_(r:I?)digG uA:19(C)x ul ﺁ~ƃNǺAr.6e-DVqoA Zo>Lrъ8a_߿WY*H34n^8 (D\畔wbs@#ԉ\o>OXkٵ\ A .^_J]xT [3m0ҵKFP~0+ܗf kN=sk)E6N=s1!e]Iิbt%x"m'"#xQDKmr$:\x%ŭx3N^ēCrUs>Y279U–`tY?O?/Y@hӐ3azSވ;R0|̰ib]ƘXj<WBN"-y8 ޲gKjq=(GP(Ϊ\kVi@FyJJwIso]cƏ-pbvjGsGS|vGWv1㚯Q}f|2&Sdr}Z#?NF[Z_Qƣ'݆ VRT .W,W F]<>\(0xţ o"m9QwZEC&UDdb"0֡6xj"&jrG,C)r-D!DO2$H7uTN/RZ>ADJ7/E^FVo>Oe Z1ڽ9R. 1-hWokZE*.h9áтu `Jktj7(9ɉXe  a9,Gy0Us,ӻz`O _J<I]0c)EԈ0ӑm)z>xP]_ZDRkSf9BBK+bc!D޷O`t02lm97tUs [QZӆȃRE1hբbg:3Jj0B/Di>]ѨQD;GH ҬF"庎t+چt/H|G䦋w%4ѨUHI2`WM4.g 0SR7ބ[QAFhBתaW*O۸$eYlYayxWha TB Ml˿e+oͮmɀ&\sӀFqT؝uvzEƪd)dkRicRx7QnLi8' GO'E`o{] #SNgB/ JiMO8H|Hшhv#i -V$ꊫn ?9+ :'yOOgZ۶_CwzLqL>fƙ9NH,$俟II,9(9djqyvLy;hu_o1$&cy_r9'e-fOwFܙ69^vd؆|;|L it@o~/{? Y坿>.~ wd~c_ =4y>ICz4 iS=ҦM{0T1??Zbܻ}]ٳ(gjp2y HJ=x>S@ Ne2ِ|?K9vj Ǒ9ş'pҩ#§W e˕6_C᩷ ? sgy: n.mHchUu+DRtD! 
ۈ2L"㜋hdH/P[۠k-O~jp.^0llď 1 2ZP9Hsؚ|F-]9~aA*-PԊ -ڷL5"]Xe$9qäA[$\Ǻ=?;{>ַػ=oEV5'vy^G&o'~˩@&]L]L.s]Β.ڮ]u] WgJeO,M(&< Ix_u bwZgO~yvy|kOʼE7<;<;'~Z7'Ij؈=V'w.~;.;p?mRޛLq2?~pc-bƁw#c5Pcч?z*}VE %$JW%[=ǘצ$3>lˬJ=OP1&Ikte۷[!!겤| "_FTz%AHݫԠ4vpg2ىi.E WX, ՖGĤQQC{XkKVP$=+n>N̷$=ITz>d=w~L9ix5LZuWBqyy˳ @Gm}{{?vR%88|!*r^y)7Of$S$:ɸ $+ܡX/ƲKu.i8?/27zlzCPjTμΛG}Wi]ޏ\|}?33Kŀ8«GȔÃ/(n(# {y 6\`⢘sK"C$84w[zJPN˘4c@ Nӛ 䒁+/)F֢)#jjOK%C>&w30I,!'q0!6bq‰ d %bnp-5 h/m`(60F[ t{_ع[՝) 5Nh-gu&^3'7 #,ԌD N glP4cJs2ipNqN;1n8/G/덏t(r+/]k]P+P 5e3}=KzG^y^«DcΦ}Nb*y&K)WH/6%ķCXj`+{ z):̷T:K|+I…,F1rWo@ASMnG`N 4־wkU$l}7ti*ңaDC?t{# Q%NJH o*((l@Z|hڭѸX[t$Fܣ8m sfAr(J#֫$XI@^PT ݺ;[j 3zΏKŮVDRPj^#Wh&i'ȤO>,O-̩ E=wwh!]{y^s /wWT`_ !ŚTFOQ?fk@kڝܛrg'UF7gmq߮o$>L ˘+k"d;4"thbFI|NjTJDt٫+#͍b8/deWM J9" ywqXDLg1e^*ƔK~;66jPɵ2\V\HzL=->,( ȃ&A;R,y*_*2r.i޷#>6tG:(w EQ*qوۊ(/QqL6̊(%҃ie2qѭgśK%q8CUzχ X{M 9Qͅ[%ulY 3Gt RpjZv^\rGRsaEۊa -qĦCmiF޹WWcz_&bwo^s _wh{.%9^*[M* }ޠ,+LxYlڽrAH,ƅ%|\9$ZSƌɣ@%Cr6Jn(iR5to#b F}퍙;45 N\{V75Zh7U/v ;c$Kr儹TⲎTL9c+*p:V K#)jijl![IAlk*J)۳%"\JsGrvh#, Đ@N^FxM?Pj7F,dEoRӏ7o2kvY ͥ5Kit@l~TŰ>01WZOݳu>[v?a;mROa?οFO>.8]K`(2w8P|G:NRR`ǽa` Ha&2O, j` nÝryb%07=n6ζW#]^>;dfA֫EL8xִoٚv}48j uTupQcVD[wԅ(8j~;D!#_fnFێ;21d: ʎ?(F" (I )X!)żnX"`lzw4ac},8I*fHR4:iVޡ%\'z3 1ʲJ(3#$‰-1/MRT-~0\8NI,#BuZw3D@1-r gZ'}4HYTBCԉQZQCP8'}Q 0ƝIgc誯g+?_W|J34|E`8g-dQM\`<<+iQ(ht4*芳^ek֋7Hllq ;&cƊ1JAs(&2븤Jq$,ruC-Q^4S!U&Kyœ 1P8 g}€{kb|̶ݺZCV9a8*mXt9ݏK,6*#%ؒܡg/5pjrDA;~0:-1l1B<҈HSGۮهTK5-L.^8ä"Km$.\MTHA!n٭;;mޛ {ePK|Hޏ-(W̙rr/<@Ca6Y^z: SL0k0eyZ^g(ߏ?*s5K# (_"F 5!*bDaa.f,tEXϝu?eT@0N(P%LDj8' NbDN\U:hXgD:̵'[\j!TL">H1R.rAL'cƴ#$Ej:ng\0 gqM=\,,rγĀkh>I̶ ʅc!QJF=)D75:Fs_D6[T>d \oa! } ޾ l~](|fA(yꖻʂ*ɴGzv>{Q|(1J9"xq wJ=:=QPަ/^_x2uG "[t {Z}=>ɯ^_k%1P.ʋV!c$>h s|_^ WhتR Vb|cC=3*SR. 
mѲ^t+B*U(JEaU**JrbMLQ $t!0q)(X$x6#qlH)+,ruWj!'B{wdmZUK?-QL# Oʈ p 6*Db0u7`%8nȻ0Kp.k[e u'+y,4aIq5o|DMri%EH#q$֙!U$7 I-0u$#9’ $ $I(/D$ ۿy(b\$])q,5Vh [0nr ;ʪUZX?)ƿ?WZwVSZi 7CDRI ,U9+2NX =HyVmzT͸xwRͫCtw5UA.F5O?RGp SFE׌}\K ŨlGW˧KA5_ᵇ7mjATmoZr%WNuh[twm͍TN-TC*3gkM)[)=ؒ#in94%[ERŋX/3Lk4@wcK]Z\4aǟt}WnuNTRi쀖|ܽ!J$d@+(=ykEG yJG9x9ǰh(wL}<, ɶF;89x8 NMT\^"Q%%g+d"+D+Ol:=]oP^oݓs. yd ~iqF u>Sjűh~Y pů|f7vzQJDr R(CʗBP/Matt@2N(I=ӥh^|OԬfg؀f͗6b'J 8bf4Py)c _hxDx-  e>ʃd=p~RqH'"5 欺龺e>>!c/p#kx+D~w⯉Qk}̷9ᦣ(i~,L\-5ת"ZO[p)de}FBCy_}r .^$wz~n%2Ї.,;yP[jI %0 aI"@ؐ{'5Z M,My"p湮!8?XI^V{aҋyG׽ey)^K vv˅c;i6.&$| GH;K'd}JrOcH)5=&Fd$#]&*do&_'%PH/^ x.zT8uؠ:|P6 6+:%)!t}dr1g OQ =/XBz>̧X0j1sx20TXAOxi%dSd\™5W2) ^.0a @,P$0hcj*@hPF Ҁfʕ5&ް,Y(͉u\v^:Âzy|ϣ:CX@(UgdUZwʕR(ՙ@T =ViH76百֣_67~]=dq5QM;t;.Х1-]ݤR~|L t^rv )c%h1uT=~o]@L*snL,)f 4OQrJRXeQ)hMӴP!w[9v.s6eW];T 2%GÏY) 4Hf#% {NK2[5i7;98_Rsѿ(.vsS_e/( ʩ!K8T0$$^rc`ֲ`  *f)J)dhW4G`RT&E)$N5"*;*_[[!tJlayRóJUH%5W:@lӦ$x3r"6r擸fӘOcUq欅Ѫ|zQ8(kQ0m(9hm띃Fr}ΧxвNOLԓq>uZؽH^]hm;A-y/tuͩbA ZoJQH ;\Ĕyݔg_wB.&6va<[W7+|JJ>UDf!N4q8*&lq+^ۛr,* ,C$< ½2v,h$ϔc( m~t5%#ĕ qfGQӅ׌j3)|,'{d+׍eWFS0?œPbv]?1*D=U+L-w11 H=i}`oI{G)(pH{h' ="GH&]B\hvr l^]bQcԿ͗(k1b9!A~9~5eDmrij'F:gw=eoQ%}s RwXGs 5z|8[(I=Ėz1kIwf2 y5{_ KIgMRPm0ds wlfpd)k@v,e5Jt8=ӒI+'V'ӆ@>Fp4S2c,-S:y银 h yٿSIi5ViY"i^Z<$63.8}A\43)A9Rl9l<_}*B]<y?uZuN+h$sq@^Ԥ)(Uhׁ8tf.'X,3qȴkB.%EU焚f'PoWLSw/T=KQFP/`iW<G ! 
Z}rkṵRK&\r; 91G8W}ըͅ` |Fn~V zVz86u+ 2%4s@YL,c*65x" ȌXpULZHceZZ{lp{!Э)E y>E-k'HX7ޖI\6U>"fג*,_-]b!pڬ}X9->{>4_^}_ooI^kAwQlU)ߑ qi W6:)[7UN֭-1MY4Yn]h WѓuJt yi~ $I4zdm?3%窅T11 1Rl{ x)QBt"$;I\v@V!CG1[@)[]Ht#A>HZEL~x O[-%Ur(^::|ϔK `4OL`6 )`wx(RȊ64^dF13X(:JgOS@ G][oǒ+_ّR} =8yم׈9T&)i$Rf"M,q=5_Uץe]  P !)P}`Žg/)\ޒE ٮT=x϶-xB̆T!.X !(f)ěg#t[rPޅֵ\%z[2Ͽ2-EgWv$|[H8!MϕV|ɤM!WoSH66g iG9of]kȯa#IE~K%Mp(I(*i((j F+ry.TOz7N@J0wbǷNV?OFKJv4:mh*x-1 Z*rrׄSFV6;7+9R]>*a-#J&."4%ʢ$.jG{n@#w?~IZuwvt>;._ytקCFN ~pFl)n..F_OxV!'w.th<j O71ޟ \ErA H\#܀8ؙ?X<+Ղ#h!9`$SruDi1Or-6B|6Hj,6K  RR_/4*CcyXEf"jr}>=A5 vmg?b䎗>AZQ)|yppvF߶..G~@%eRЕcU+J`(JABEG4`I ̡D?0P}C5M3 OMUZ#=$!r ,K<yu ^ ƞxx"5Jhg24F+MG6 9JRO@$!*(s1 ѐ2a\hJ$/g|MgqoϙR{؁,MsgB<%m `떽MO8/ kObA/bWG˱k|=kB2Aݎ3JDٝڒje7hyq oyN+j Ho{UEyB.)=5n)kww˻̒/{yݼ*"kލ H\b*e7d<;TU.8j \Fcㅰh"8K0Fuq.<3¡C0xU? ۉ+l FM=#o۵z9#v[\m22O]еiŭJ+hشAaAs:6D &ZnY C 媣wjnb4eZ  MTEϟdT=G'$C䒿nJVn?~g=.-?Ix>d\2^.Vsw~PN_yWkA֪?"W|V-ޢסu'%ti&\^J>C }M(keGHZ&o_N'd FaS+M$0̦mcj8"r#ߤ0]o. 7œJ ++pNV&RU;1wj~ڪbcW&÷РB@BF'6YY"=l2Iomȶfe O/.4T@{ rJ)gu [KhiR&hQ]~lEN{SINB㭩v C{+t0}WLo.q.ZI5W$)*킨4#gu;-S>TWr'SEYxg 'jDP),JM+4T L7\K\5F:A2qثyG$#z ҂ h'5dlfC|3|\nmb@ O֨]dGM&a%`5&o^)1ٞE.MY}usT:In.\*[&0gz^n@Ru?l3m&ZeLхPV)A+3WN$SII`' {d59*yrԩBbLFnq_{›S)UxV=Xx;:;#@lo]&oD-3:NJ 5ɣ{25D3T5sYeef<(3O ܐ^gp+jSϙ1oqN([8oYU;T6ۖ+ VMb.k9ogGXY^Jljv!ܖ} ^|Bn"8U;Y~[PnE k)(nkѸS{,l+7k=~\?j2z580.3" qTAl t[$83]7fv}g(#F9@Opt~]I87cF?gk;< '1ֻ_Pes7S0IjC5T0,ȸ_̟/{ C/ 5q~h~ǻ?+3hJJ>ˇ|(~(y9*K@RbHF$BR+e9w(" #9+NȿO&i4>dL@mv# ic_F(~ Ѭ"b615B4,0|̧҃ U1y) :W/ šQqDomUH9TzY xPA#x2+[,{g<^Y+hPm1uJXpDIM:5V$IpxR>nW5b,x]?R2l3{ hoq,}x[>/3\dQН `:ҠbYH1rI(e8,r)NMw5qYR̶B+05P4CJBNV]kzk #+H(PqiX"*" K.x\CqtqugyV%Q ᕵGu@QS*Mt68'J Ca&Q (К܌gxvg2vd_F3z[~m(؋ <6z& q4ĸpI2Mm눭C|/.^N9i;3U)Av,[;Q{0Awt8]Gt7󘶢Q_xy%-jq 80K$f-qIUԐL2ty.l(%gI*!qP+CK@pQ4s23IΰF%)WlX93TB\P ݃p؛y鹕8GʠEu}x{}(H 1~F\^jv{nd:[O7sYuEt+ߣ1Gä.th<oj[1Ie~t3XVC+v?QrAм >yAe`դ,2y$|n;0< ԏϙ:hm⟣ٻOIT!h ` mc7R \>y?5~izZOvPT| *J w"$U:w';MhRM&+<F=HVYuGh9n j5OlzPv Z6R#TER*mW4T0@ETMZV^nJpzJ6e9;şM֗SiԴK}>1/?;şM]^fӱw7 
{^ovh3vQFok%"$h%q!ztAF*}NÚd\'gzFZF#Pcä%N>#PD}2"E츷ܶDt_'jF] &g(c>IƔC%*x(@%f$?2P,#"&a]O?N o gԒ8ٴҮ:$*hӋ<5N9.'ؑsN8qPpp#%ʇ4:E '#z6;]9($Jާ=obCF 27-UZY)W/|\x(VY5}CqŠ"cх£sK rua"}$o_6=1YAr zVA땊ה;0I$02s$zJ\DF5"/TA*}X?UJV#J=qWL!z T>a`18F{hcQ?{WF/N6ߊ,0p]L0s3vM[ZwUK~iI-nZ 1vTXU,Vy- ppdI`c+BX7ƣSteBj$KoCMU@T8AT +슑W^ 'E**L~}&Cit."㾓X)[^*]uSK#s@MUKRqtQb0$fɄk sLh$&zN9ҟ[MdPݢ*tUm* P[;O&nxHʆUڅ[+/$ޖzKY2h8y6t5 ibnT+r4wI൑A3TMŦH5 Smd}RM j-6)E+rvˬI*C[(t*tWNiн5R+Xdd֜ &Q /S{q JY>k9D[yűʬ bi£Hc+kWa^rӡz;,㩗]D9Gy+\UnמTmAl㵠ߦ.͊)ܟ{o21<spm5!|>U|1nozgTp2]TUOqc{7ezN|3䢖ٿF2Zb|mH*a電,z1C`g{qdd/!yݿ״MO 1c^Un״8?tugo\^Oo?[΅ =ى[F*e+IC\ԁ:Fy7=Xiz=\Gr=2}YEEuCQrsSc#皟dl *}sT[F>?8˞ƤDBVͼZ{rx04,qo]G` P h:[jyCL4eEdg݈49D$+&]`mAja\ˁƖl}A=,!ɬiz-%`;XnYfIK`VҎeu(pƕp`J]vUH6Xܽ* o }w@ܟj+dJp+;wT@5ʚ^B}oWiZj^ˇpdYv?wNmٟjed=uk4Aa˧-kZ/xnhg>YS.ؒyOvXI lR?}Hs6%L: 43gAO{6h~6G~+jCb{{N̩9}M]`ĮsnTsڑkOT^4H[?afAҌgZ=ޟjP s~<]JiBh*T#jSj쳼VgG$%Ιr1U=ڝgBA>鄃LG=1JJ~=uM-.eŒ-VH^, IE =hoŶ.I7p#؛l=o?4Z9Pěym^ xv[\܋$71nFp%7cF{võ|kZK{Mji"|ϯW侯@&_WJ T7Q}z c&U=ynoηM*c {%F2W=ih45aֆHp66DϬTSźO>YWoC-A;T` Pp> < J ૤tQmԩ QekatU.+A蒤~ Yv= Z[]&5;^NxsVXzRWM[@bͷ]!Mt*j[{z*D2{ţOC_T$ثL<7Ȭ8.=:muhR!3A+ jN!"5QQh)2x4I0ބ d(R6Ea4t U3_DSNmb!ϫ*ƍ5֖VQ]=0Ƙ!;Q5Ql iPLn]qExLwJ݌|Ҁ^}A6z OV{*=?IzxSU@y Jip.W.Of2wَƼE.cY.pȂcPɺ[Mι&">uT^`ƌ{ki[p(.vpZVok 1ؑ m:{%PpcVzDz aVT꽠l3P~ J,\|Ug>fyk?c^碃u*:xNZC2dL]O/g K:JywvyxF\7=n%`ֳ"ae((ēȌek_j^9'> N휈.P\4Lɥ/L*RM2@m,p0"z}OGF a@ z>F)XQ,  IU@ b) ce]BK#L-36zOHsʒ1)nh 2M_VħP4U9/+xz8?Voׅ:Ϯ1ӎbԧocS+vy-oOtdtgW4$B|7|K!J/շ5+EV4\Y"[=U[Z:@oar x2ko].}դ^9/7m(Nq-[J<߂gq+Ey9?:LwN7lEeeT2zYm#قBlڳWɴ㭊> ²1U]ߺ%1UioC /L^ұy +W^xu,&F@9 3gcU̓՗?k2_,y~ѳ y r~iq538~F!?:TgWgs6(I` 51ћOq",Jq wc)l$ޣgEK3D:YVۑW"WœՊ֌qit b݀IepwI_~Tk-o1ӓЂ+[!fz&-W֎bq Y}$DE1\}'Wy'.d[26qLC`$y?%NVĥ0snaY|f~v/DoBOu9,BZIT*Z{Ag% J*{IFПEtODn(FDТpy'FOfbK+&. 
ElA/?R޻N*ю[Ԩ ?ԧp(2 8nr>&At~u|۝t[f:x: 9#£7LIZRŽ[M5 2`_&Iw7n17K|ìΪ l)IoC"ǞC'kAnDuY\ ߵMGR5{[Zg>;EOi8HP` CSXj3ZɑI3 }ѐ Sc;H{(1[sT"[!Qh2zi/kAX-d@ԣ]YS6v} 6ߔ $?^sIa:Ohk|iG29 oPXɆ}翾ݦxJɆ1e<5O{|p+|MމNyNa,^MBwE1lxn7'zy0;)QKzs~o]O?uRMkI5~ 2ނFo܄<6>̵_>қC##rUFAq -@FUee/t &8^ :SD2+nK^;ܪ Sd`c,6FzfeՍeU!^ZʕbB0N~8 DT%4,:Mɀ+Y \ReP E$N8/50ЪBk-4@ BPPJhu,$QPESb0Q/eT0 x]2M' bbE1)Ul'l% l PxӘmoӃlyV2zP Džq-٢Ft (|B< WZ벻`bR'ʡx؄W}qiPoZ]xXQKC")4S M,T^(rP̂9&={VNE$ I4:OFxI)s(Jm.%W>(,}JkVY:(Q,wP#N)Awӳ)*}_ʼE裿uٲ$W8CXXG N|h2Ȍ7Ɵ=ˠU !+OS%'iͤV .u:3i'QU\JNZ3 G-WVV3gwc aoɁBlKx |W͋do6{7eឈAOTڱA_5,^dγ^/Z6^}2H eEg6EwG&x>OyRq[A$/T'įI`7}HɆtb*vT~kأˤlw% +/cH9,jVKYm5>xJ%E7F >IiǕO~RÛL6,4ȳU6m7/S=AmH|K Q~ן?(9\fkrf嬙2#9*\j+y-{ ~xduoy e5[)YJbV"ċJuً#\xIycUp3pUUcDZ2,;?Zb0Ƙ!7A*p !Y1Yj7&CYY_n7Y±Ѹd4)%=p1 agVS!9{X8` 7A?[fG^s G2zt NIw' 9jW<+&)Igw?WVӃggZ}[u ^FUw_< }Q4)ij02G~00 4 ;@%.9+k 5bV lbVY &@$˵4K`y4^3F BrW\!;rh |:B9Bx#q`۶]'̥7ȬF1,5"N5qapT +A iA ϑp2+̝!Z@T׉{N׬ ^X:q-UEQj嬶2A}5 Zq\*A@ &IYυp4( (Z3n1qj`n7d-5piG.HHR|du[XjCVXQ pSKe`x=1y`ŇQRuH\}I}bEb+a@|{{}( 29S\,!ހ8K?ojMdgà >Gz3|L!#)w Uj+B&fwd<&cWqp+t81q9Xib@zg<:D~3y_y{x FA5P[ɞ5[!.&.s`!Jŋ4zFd8נsX@H\00\s5@间Uӷ)MT ^~ sJpʉ`FjpԘ ^#2\WԽߤE =tW&(,(+ (}FTA(~#UO)Og{a ('\ XK],aRi!3n~m@LZ-urק$267il~0乴_BKq~30ϰp^3ޣZ+u83B,ZLq<`5R T)ujx?lՆolvyqa#h Qb)u\Gn$ƔLw XA}j^i]QeUwᥡ|ws攴#Lj`b0#Ӕ 0Bae aJʋOQ-3X>ESԙ|L?6*#8y 懐m\& `0Rbs eÛSHc\iiYE DcPs0Ky-b[ ~b0%p"gASa Rjb@kڭD^}Vo䖆>wz+mn]FNzpRkO:^bWpTJɄ6BDBP 65Լ_!X?L418 3E_ fXbIF7K4(<(F9gPi`FRb@4^/H/ {E|#F1.Z{Eknp3(xR ERZCw²o/zSt<,9+UJ`)@+FBp^SJ"R:8%"( Y/qkxJ@:ŻMRgb6Ob/2+iX}6dxxXу4 ,WûeXD)-&c~,HN`5W2EN.urQr]vQ#1Kgn!iꀎ_k1au9XN utpng,O%l&I~v8`N4梟xS TN2K7O Hكd?D$f1C-GW 71{86T&L0B VBL@0hz <#59c]k@_ C$Yi5Q,-+Xk 8cUTĈ KXKY#gYӛm+G JI/L`vvkn .#C;5! 
h`& q!Xs+JJj{T{xPbZ6{w UGbkB%N=t14:XY`Mm1JUnݾSL3)5^ Ru nŚSy2%A=4EwmZl՜u-J7Ŷc%x$[=aN+[YT*,[=L.Ww;Q_q;rka9̙B7 m»osFZ^%fng";9#d~zYcKb ,+GXrр~JQ F:B V.ؾwwj?-lQ-UOO4;U*~;#FZ,sba1J8D1",[a; "'ƛ4{[(Cd)&&KpbH{48RrtUywSӾY6*}: Lu_ZP-cEJM*ȗAnA}YԚ"Gx~_f~j?wfg7[N<,|aKc R6V_ޮRb_XQ&t(~~/zxZڸOg()UbTGÑ/Xv9>ϡHE*EyJzVr>U`Kh#*Nn Q5Š4}GvwݚUnmH7.d\BjV^oc%0B_^Eկ6&XRu?6H_l猧z/טaOdډU-0 T㊑GI8ɥcZIY0RQgBuEι΍"CL4;*v[)15Vc }R;$@hQz#ysZ`mBZk@{nq=)Q2a]ICa;+sqAяa+f9fn On~*9F#FhQ_jtw6"}~ۮX鐂[^ߚb0`Y$'`~V

S< .KD<.m<ֻ `KYr6,i6Cs5+oEѪ9UT3Y'?}R> ޤT x2U#^}/xJ̝ٔ=R70ϳ5wK;l5!DE'>"YNanB@kL<̧fuTϟ?o^ԭT[~>#竏N?OYB*!< 92L172`sU-ΉwUo︓˴J\8gU}"S ,TP5^?_%Ub9_',C狿;މ;czd*AYlayT|GWKT"OUzvV+_#3n$`U#X_Þ2κ3 Meg`4`sد&OF*ejKJ8BTs1?/-zTaM,_=G|)Tv Z V\A~5[*ږWWv`(8̸.?jɃhKY"PY˴ɖ"dD"nMj%g[1\0řjAh,uϊE!RuFbL)>Pm`1aG,0əу v'o=4i4,lqaS$EwՓ\Ol Ftsy|xqu#kQn;ؐJ=ӵ)51hu)c? ~x(+v9o&w7k./M s^G p2Ï zpseʛ|>˗Tߪ^Y. :a˺bV~}㗟O~ݦ!Kc&{i^~АL:sս*1;{%GjA^>6/ iJH } q6usܳF~?i\eJm=3euӔjiJ5Mx{+]Sj0Lph "O9*7\;Vը `+cdW감HϋGsiљس:}3l9G.WVw L"B׽NQ]Nhetw9wϽH36j" 7`Ҷn0Of}kK)WË/f~1ۋ/swoOV<&%}~#TMW_X#,1"Ѐ,xp"J{R$Z0xFpAFq \;vۇq`3EX{q4@"#pD/l~PòЩAVs3ɕ"1mXH%RH~jzU'@г0 bKfKwW1,{|~̷JqW/doA6(1F 9FwX.u6 HAwzc# ȳ.Ad.J[ʩpSA@K8`{pl ȆAjpS:/ɏ{X<hy0yjCǬڐ1IDȘ=罳a!#ք8M8Xф qNgM?`tG &`(Q _I=}8%y2ts $EnOn 8s1+]V|ќl*"VX LYdY3 `6~ugT7P]r _%(xUB#DpQ" 0u!5*=\փ3T97e97{?]'zDS : |w= ^&t;6Y:Y]auyջw/ED/Yh{Qj;&qS73J`u$#aH9a$QX Ԋbu>$mQͫlJTF<=58cI<0ɊL$&@n {/+ Jh'm΁6/B9×'g=M.7L#qsoj|8;cϐ`a܏8!'ϗU; )m^tXb3>Ü`L+_pu֯V>/Ps%;:u>b.dR@YF-75=lFR)#eԍyTĻ}PY,kN[,>c-}N(A)\rL2]yl:z^UJ?{U,V"]#py(zQ T< %Dԋv*O-G|x #8-,ΙҴXd7U ?/ZI://Cmʁ\a٥x5s"܌YĚvObLH*l=UWM u~pJ?~h9%;Y>ĪcH%,n Ɣ(O$k/w\܈=7Gǧt\ H\"#@ &ȃqE,buLR'8~QIǍLgx@)VMNlJz p6Trr10FszjqoՄI9A$"ќ# pKHs-̀L lsP)6 kE.\J$-:D:⬡ceU0cK n[na,ʮO(Ca'\M.r 8 &?|$.-onQ7?o-ߙo~6 -,! .<}g$*Z{Q Lȋɯze?}'/aLR,J%K}TCIHȧb| i2`nW?-'M gp9ɂ~s[&*w_߬6p߯/W_T^x 2 &^9dzz:z[sS}3YٻI%Y0Yb[/}&8D @FѷI/>y}]#MH/`\*t"o8Qᘭ2X;ЁBp@PA1S3Osi0q$`Cz gIR=W ܧ%bvX WA F&[#weQYSߕʥA V+ IA. S r =//H L YOo~OJ1!R5 _T >Tú˶=~deS&:`L4yEH.@zNa(7LM`VUd-j6_M]^{% gPZˬ Y7V h@nA6FNk]蛣0#P *8Z@M$`'H~'vvl8'ќx 0Gz0 GbpJR֔"5]|%]*ZCA=B@ h(ҕdKRP:< Akt?QT6Y{J2iP18QT\TZ+ *N*-QX\№+m ô KMV(].ª rJe%BU*f]eM2qB'iN2jS- N֔ͩ"SxőB|K5A[˦bS3u:eSOK^eeېW.)2Wn1hT bD'u:m3M)/0vmHֆrM)NFNj7EF Iv)xIo-ڐW.I2xtEQh -IvGBTitvvkCB^EETI>Օ =J`olmVW+ٞ]*EvNhu%HONXѳp\ .dN8 'h'pN8 I ~vB+A"ON g;ݕ 'N8 % @>l'@dvNhq%0Js{{`avYPb^P&)HcDCzq?Tz `{"{G -@gvsXO.cD3 b& JQbQwnjd_MV_>jewf~݉&.wN:<-ƭGr[OOέ0ɻVAF=|;>ƤD58d{g~2v9 ȸ!<:c_wsgzYy(#/=D. 
f&‰ z\z)߉ tuu dput:{g߻[Dd2"C:Adq\t`V{sdǯ<:__2NnALe84zd8˚2Y߽{#KL+}nX _P+*r>įXs- hԕOrA,B˖Ebc{z4;%+dUS4^Vmwwukz(aP~j)8fwLmw . `"\݆ޗ*{0ɤ=]v;%U?U9EGV0%YmT/Q}粁IbҲ ^%OǓ:HW@9G&&EfnupHcg49մR2U1\ɠh&K 2XB[T^Ƙh/ѿ~gaǑ쑺,w*P( SYUMD-]h* ޾bQs lG02_øt'\LX2 x%RM%AE ))ѫR[4i$ԟu,Z/L}cVWH[t$h; 8ٹV/ݘfߟVwi*hV}q~cNڈ=sr2sDIC;+B5V%dyjoH%7e!Nanj|1: ݛ5Y!HansA>>TՓ_ƵfTOB]z@>-i5ARJZ=F*IZսlkWs̊D~9&^vnf."wfzsyvь/}{xEiRDQL ceom4g.?Ldf4 3V:ƲۯwDih&<3fxȴ(1 ZcD t~;vt?_+`y1/›ɍ0ӛ  ËgJ272l/`b$J`ˢzpF?,<?2c~-Y:[a%GzO?腏a8]|/Kg[Wl ;>{p^!5e=T(IcF{U¹d zɬsKb?'hDĹ-#G6ULC&?׬`XV'[[S@V y휇E/zJKJ|[V`WkKvN<ҼB,NOƊ:Å䶊Q%dΩT +Hc9]NM0n&3Ӡ *Faر7\ҙ`d՞pG*:ÝϚM%j8]rrS8Aayo:quZO[,.Q-nNł+zi .Քg"1LY FfcweyWKwzs;`Ъsw@>HN]tE&my7NRBS)#aߟa##xJdN)Hգ?e!%GxdJB\ JH"%j넩?Jwq; )pZT| \ѶSlPKl<Cb%sL a7ysJ萫NsRwny)/t4g#xvO ˆ'}ҭemyǷp8}{qI8DcCQ 8R3΄ڪZ[1q"`2pD buth -5BVkA%O*fuVZֺtA8=}*DE"0 qv7οS͐ܧC:\A<#KBJiʒ !% rͬu 1!5^  0qX-7_ v!}X1 i3R3㥖6{dTiK,u\ ɱ&rA]⤥ISjuƆs)|湠zgdze0ް@He%0yx=`&P'%ŒF2,Z (YEsI1e8reD&`@Q ׁ҄0~LJ?W<;xr0 `-MmWT*v)B)1f`SUfs "f$.I9!8 M^Z~~V02>>w0!/r+z~yytAx}ylo&;RC D[dLj_1aRB>9K7=k ]+,!9XHr\ P(0\h,  QDAV*7V!WEVҳ6kE\jreOCl\ D(fo-uő<r!31yO"uY92[ay-Whޤv%8v۴rGƊ ØR "JDq39iT'ui Yrf9wV)TyRiF-ϕ&|\X1E/}eEs)w}/zhi:`'V z|@Jtr@+&9 .$#̥GGn!pݬ%se9 .<0n$*Zh*Zܸ?oGì>QN}f1O]^w޼L8/WVo (VހY}<;QR(<*%"U_*RKUW_}j;%YŠ)+}T]KVDJz Ff/mJb"u7M?wtrٜ$C,XM9YPA؅47=Hk҃Nn_2?ˀtjL̅ns\Fr7<8S8s?—&.>HS$4_Ȣ#WkK(a;{c(a 3~54|Bu# |? 7̨/Lah f8|[{oW[x?fp#+ *{k8JZJ4G-[yah@s\\\[' SSgLJ5 Fpg+8?*|LoWUgͽiuotU!01ٴ/-?oݶ-_A{k6MJeDV~m:6Dj(=9_޾Q*d4ol}J$pjGhAqBCCrg8`# mH:^y~fr3ĊkџlҔˌ]]ʐOt1%.3wK4f ۶%oI-3!ĆK 3SehР|J+hΘw|*`mT; P]Zߛ,Lië۳cnNCQe;3i+i{57+$]\^~uokkm"וv"ו4x[ksBzc^+ji]l`aN#Ac^ Gi] `aNPH۫?\17ݺILFsʈE2G[Wdw>ٔNuIKLi3+r6Y)[w{6qTRpPN|t깙9G0M5cYrXST)=JzUOkO$ː3=]hDl=6vn\I; 5j1 faE & 4 +`H(a|kQpX[-iiF!/{4 d"&!nUZ-'ib nrHw-Lp"U+ 4j~YCd b-u_3za3ĸPplfm;[O1_Rao/ɓ+Ϯ%8b$l3ʳ09 qpmxМ3q::.hgƅ+|a8)uyVxTzlaD?F/..[cD.Sjm94ǹ %  9X(q6ټ.)3Qh`f:[ X,hvD#B)m#bFar΍YZ:sq؂ju*1YW sB`Tgz_2d$s_C\ &A7zIխYhsX-Z*Y,6 mX!BE ,. ZRJP/ DZRU+$hqFR,? 
/Nl`9i"hTv+^ K1 a h0#65Z䒦"ŮXsdND ˣѳ_t싃HPpڔa ppx iʥ56Oʙṕ2 G uL"8 "E "* K콽jYHlseLM$E-`]k\z 1ZbVLHS?ހ)Xk2es!: a+!29N3ƬUuqcg39ζ֔ y `o 2VwS~~9smiMvhvS<=V{a~7.XJ'?"Wo?LPjQM>rH#Gt7pO"}*ﰱ$P-~{ps#x4n!Ap05ϛ6ss `8@VĘ)\ W&V6Ȅ{1'3Jp݀mktKǔ <"G+Z=A^)e TrJ NRXsv`4M)U.Acfpa2,5#aNi8$e4zv1#\=hoѺ}PE d!] {W ~9^fI3XI d@2=^ E.fr~3)œ-͌O8VSGQɁL*}aꗫWv5O2Id vO_j-U:ֻC5  &0s{7tCgԪ+/ 1pJ|"6T*z}A@}ղ XC k:ݬ(4pԚfK37\kć&w`C]/t\l^}ᬆwa8y6\]?> +A }s ڮP1&vz,сw'%=_ r+b*Z􌾙i`? K5 g>T zF j o7nŊy[wRn|`d%[z~nDxZ|S;ʷ}bm>}S){3H٫-C~pAֹ}uRPB1w!%%5z׀.ndԕ\TzGywޓ=Zޑ QBdU5<"xs_ _rÜ&8>УÀFAn'ΞWkj lxP<ں+m' m<nOnLL۾enqĄr⼿1!0L8>n )%j?baM]5OPir,MU KPnh&Y9ʵ%I,bR^E蘰W yoxөBh]_#,8,šSBk# ,tN&B#0z)$h^nu}cf:+a燥,A3_XS1e[{U԰7f[V6>} MQeKXOz;1ɒ8x81 KyR5+ց9 /1 ^|4Fu4yE׊b U,ixwM]`i䗰Bʊ%?(P pVQ.h*,w.ܞ.3(/-EqWׄJ%e_|wGեSV+z 7/0Blh*WSNfo\z9N]TI=);VCͅxv嬄O[6/ q&]bvÏ7L{D׸r)ɞN1dvLzf9P^!=B78`Z$S3w={%z'!e~7|jMk߭\u~E nD>R7}K FgݛڋGֈZ= n~"R18=xiov>ɿ9SOu]b,e8#b>FgL=U_^&lȰE&lP }AQW|7ͻH+Q''][^=7E\!'d7A^Rꤿ͛f?3BVe͉"p9~]ocRs/!x{)Е!t_Da & 6G Bݘ*~wb߽J8#.8tW**(5bG]ԃہyG5A.B5cY/#aV)>~PC8.iKdI%oUZ)`5`.9gFªP@Y^EУf}" ͂*V)",3ˀ#`"2v?yI+J&eg{ĠMÏaXuχip{qj `|зPu#斴=`l,E!u&ںQ|Q c!8rDn*# _(8#ui;Z3$@ ɌEP˹"X($C&RJ,MJJ X DKJ#'}XOjxE&0ŕ[77 וc+4(Aq/0^L'}u"?;kwf#x\y%bpF f'QI a:gh ~wPp|6̲GE/z5Pܾb`/Wsw)oe^de0ɢ_~bVv>/)a4 "9# Jinܦ kb1Ks#B1!5H'iXX[@&%JD.-Lr JiB  Ak!ݹ!EaR4VT<)a&Vk^VpDR3'b` P yV&pYqTp-: kAw @\oװ%]:U{,&v [`#]h%352>|~E>, 1E@_}GX𖛻!AD~D.xCp8F?[+;b,@m+ۃ1F@&GAVObd6Y|yj<)I9H+kL߯f3M|<%d< ]"J]p sn頄b yQ"j8Źuo?"M% EGkonդ+0Fpg5{:sl5ƃvz ׵]!bJ^LWJTs{{zս-,e]-w)FGriTW&&CTe&hhֈT׭hK:EOҋ ewހz+f 9=@] 螛N7e뽾kڂW7DfI2&PŸ<1qCd/WrcKR. 
qeTA{vU =&2y?ǔxzV.2{eΞ5[]cB35lyЍ; Bt֤n5/żVЃhkP#Hen3Ww{ @5飅N4Y՛̋봓׷݅e.﮽|.fzkSv8(|BT\V;[0jY95:͹umlALelnN 9\#ɏdif3O9)KС[с 0D04^Ԅ$-L(qBu MظъN H #@68!{_ Hǩ/='TV#P*_zH *r\*Q ).5qm kzC HqSH)IXb4{b Gr,ZBpUܐ7|H|P𰔇MrA~m)-_#B_՟j3y I_\`E4SYqKzn6¥S+JuLm0ㅝ$9[I⣢}Z#M V)~_ǀ}U)ٍ®>]y%_=}w%/J^^݆n~qMܟɜ{P!?=mݧ`|T^ګߌ^(Ř"GyM0Ef&c1&X6 Wcɏ.w~X^\TL|HgTKǭ׆Tua]Y簶R{異+J3gPMU& ]R3#ŵ(4Sd:i2=lhJ778 q);k(.RȞ7Ӗ{g4XUNyzaSb?nڞl$0Caֲ(NPcj)i A$]7*6ie.@+ABj]8` RR&k^t*o t8.>L=2G-cӐCN{T͖DSy / RsG*$9\wIr>Mfn^NEݥ5 < a:<(]gs]k(ԕ@XzG;1G!I'-jjhWo&upiEGs/PX[U!NSҖ@>DQ:rY-E {!xZ4 н%8?\"k ,o9l7\+O䏆,7+ܻ~6BAD^9! |0׹C=N(53w۬\-fcwGu:e@kk>7Cu k0Qd 8v/e)%k. ޏx    V{YoJ1BӍf^%>"m"mz4.9V )mC&PȂf\^R3}#P Pk_q$YC$=~nmсo?a}oMQXzWkşeo)%{SJ-}e 6su"I.~BmIkO'}Ӝ"ƈC'oiaE}gAУ9(n<0:RO)9S8D|fȱ$sB/bCؓ6Ѧn(>?dpCե-\CDo߇/')|9II _NuQFO8P"1DT TJg}4+-N)Z X0"z5gzp?GnoѪDgxG?.I \UކNH(,/3GO/Fel~ٓ/{eORI_v>g{[Q\ZU&xEi0`TYa )A(m!>>}}ΞMRC> d}[sTEyZy+fxb VqMo %sGW'!W>iگ/fo PF^zwoъ1Z15ϟ!Uv$e'LRv$e'L YNz--vQ8D Tv ԅHt#߶KD,Af݂RFE StbP2;t#5r-uzmt͇V4<@%?0 xUJcj3ˉG6)|JS OݧNR$NRj[SHvSF!F@lU'ŴQ,-$FGT.O}P+>;6UWe/TaW}ⱨCz;wDZsU:Pw!ǵG`lOR!5-oLIi*:@ iO>9j&)9j&hgRic N*+eYxmӽqȼ%3c>{uAVgm39SDwvЌAuWt0|>fTIU_?|ᛯWNgF[_gYg;K ~I{~6)ơclO95 +t@(^Fږڠ%!QPr./TVE[a)acU >E,|?Qvc:Ffgٰ(vp+TjӞwW "e)feٝ5 a8 $ZmQʨJ3t0Sޙ[gq6 |8),h٥_0.o#Dy9C@Yu g7H?O\13Ig@7¢,ZUZ`b ]r9 zWk zi~[O-BN#Bnm]Or] c2ETViFˊrAWWʻ嚪+ëx` I???\׷aOq!n?5aBj_ 7q+Mpwu"]᧴X|9_|U_9ܟ^OɆ$"_)f@bΖocMWpAv_()2cep~'8rI3xd.2= )wAri$BD6*8KFJ_*xR'!Gv᫳ϋ3% l;8kS8DT Z`q>U{N%Gfh:Y`p0T#]S&PJE 5wa|jɎm O;ei+#sA)RpʒCjaX$p.Y&0>-K-ΤǩvJ:JEAy_uj!cw"pO t{S\5Ae5oZE յ+7P1Mi\3(:FH+%(kQ3Iδ&SɵBC RX=j +8`i$H}{7x K|NGOB7mbo3&@i @$m 6Lj F^ާJg Mc##t%Q 38@0|D J6ZYsup=`YL? 
[binary data: gzip-compressed log file (kubelet.log.gz) from a zuul-output tar archive — contents are compressed bytes and not recoverable as text]
65J%QHxN}y@ j޼kǬ_P[EKNl܁?}' %8&t\43Z'SrLU}imQ$HRo|u,u5K.uޞf]BYFl /q}Axb~HkLF_C'#pTw2%[eYT6]/˗yι0hL"HB6 !oBI:/ybk9`AԎȡ6RXeipm6&E F ' ίSv6\Ǡ & Z DRkfi!pU1Ehs=?D9&瓝ޙ{">S|'yOutt'_3]Z67mCnQ4L&9 މ1R Wmu7J&߰\P9vC^Y$߄ &9|wBhK4ykL^/FRԯ)Ɣ.cb}ˇM>)N ?:ӯE`TrC!EA%u\2˻D)#/}LY`jjF[!T++FVR qqxw1}_nU7_|?W܀_8RpWgqiTvpV,|ڻ߯+/}njI~x΁&:4۩ B tfK|ӿ<_et' ۷qZQl#! Soэx|d֭ @;XWc$J^ͺfu!rT>ζ8Y7Iڄuk+:i&mQ$u[֌ֵn] Oѕ1) Px楅ka]GǻZ|R/u4 :uBZJ˨>[;O 5kkbAQ H (wI\",2b:OxR|_;݊!My;<'ֳZ%V}37K, Qr^EoOf[ڏ_c-W=S\.e9?{zo}x=; Ge X%mQr-H%؀RVa1=RbK6J^..Y߳_VA;qҊF^x'k}j)o"8)caqT 8SNr5^P]qӬ*^E+T~{%Ry}~N>?|AIX~eN4~sw-mI[*!9ImnYMrr5G@HA.tkE/??lKs0u=#kT3j*^ho-gSDxu/Ťma5 allqKwx%UY;=!\U'6hcÓY HWow[.&VIY(L,cWD7lQ|~bƣ ?_|OHKJ~t&iu'%WtR'9>r[2*IY>׻ ER oj]CyՄ4pF}r/3L .@C^djur->:Fviwnn]h+W |ecVh_кGuBhb.Wu2YpuBC^)x@v ULS[PtN:A΄Qs fN{"ge elL`Z/ Q'cB8AuX&W!C;})sk;ۭzu#WkZn\t>S/c節=7!V7젩r^ݑrwg`ycsW=, +䮢X@ι6Zzvu5bJI9tf6YEO^t.?3rmŢC/qLC,4n{o]tt=z`E<վ$"Ɍ;,(%,"M2dcɤ3Fq:HVVRgec:#"T6P3(6f66$1 Tf;Sf?PŴIlV=@oyq%V0/`[S-fseN=%7×sozFdGS4Aug{ngս[ݮMj'w4m**|FwauyC9D#2Qt0>B"~}Z϶/g|YIyŐpJOo8zgتOl 8U:6ZJl!2&#$52EdD* l@H spfflYBk@͖vj4ADO <$u09;dJhrR@x0X 0AV.wm"t aT?d+QHhB5Ūr-^]`  R߅"y>m̯?j!t&9'SzOӅr )|qd0Sz->:Fv8ayo-|Ӻu!\EtsZ#1/-xUԨGpOTiidYٮͣw^6PbQa ࣼ .'d?_8/wD-~Df P{;sW/!PXZ,OrayBe!Br$,_BW}k-͛rT#%AzI 9iFS lfd(Ni4˰1V(% J`bu*ZVެ\ĸ;9G&N6Lq" )Jbq#r&oRvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000004756132115133723224017710 0ustar rootrootJan 20 14:50:03 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 20 14:50:03 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 
14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 
crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.562419 4949 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568305 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568337 4949 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568347 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568356 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568367 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568375 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568383 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568391 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568399 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568407 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568427 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568436 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568443 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568451 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568458 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568466 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568477 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568487 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568497 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568506 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568514 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568547 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568557 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568566 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568575 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568583 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568593 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568602 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568609 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568620 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568629 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568638 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568646 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568655 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568663 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568671 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568680 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568688 4949 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568695 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568703 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568711 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568718 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568726 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568734 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568744 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568753 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568762 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568770 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568778 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568785 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568793 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568800 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568808 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568816 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568824 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568832 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568839 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568847 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568855 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568862 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568870 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568878 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568885 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568894 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568901 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568911 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568918 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568926 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568934 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568942 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568949 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569341 4949 flags.go:64] FLAG: --address="0.0.0.0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569361 4949 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569376 4949 flags.go:64] FLAG: --anonymous-auth="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569387 4949 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569399 4949 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569408 4949 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569420 4949 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569432 4949 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569442 4949 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569451 4949 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569460 4949 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569472 4949 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569481 4949 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569491 4949 flags.go:64] FLAG: --cgroup-root=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569500 4949 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569510 4949 flags.go:64] FLAG: --client-ca-file=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569547 4949 flags.go:64] FLAG: --cloud-config=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569556 4949 flags.go:64] FLAG: --cloud-provider=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569565 4949 flags.go:64] FLAG: --cluster-dns="[]"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569577 4949 flags.go:64] FLAG: --cluster-domain=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569586 4949 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569595 4949 flags.go:64] FLAG: --config-dir=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569604 4949 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569614 4949 flags.go:64] FLAG: --container-log-max-files="5"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569626 4949 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569635 4949 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569644 4949 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569654 4949 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569663 4949 flags.go:64] FLAG: --contention-profiling="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569672 4949 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569681 4949 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569691 4949 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569700 4949 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569710 4949 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569719 4949 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569728 4949 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569737 4949 flags.go:64] FLAG: --enable-load-reader="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569747 4949 flags.go:64] FLAG: --enable-server="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569756 4949 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569767 4949 flags.go:64] FLAG: --event-burst="100"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569776 4949 flags.go:64] FLAG: --event-qps="50"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569785 4949 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569794 4949 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569802 4949 flags.go:64] FLAG: --eviction-hard=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569813 4949 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569822 4949 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569831 4949 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569842 4949 flags.go:64] FLAG: --eviction-soft=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569852 4949 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569861 4949 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569870 4949 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569879 4949 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569888 4949 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569896 4949 flags.go:64] FLAG: --fail-swap-on="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569905 4949 flags.go:64] FLAG: --feature-gates=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569916 4949 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569925 4949 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569934 4949 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569943 4949 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569952 4949 flags.go:64] FLAG: --healthz-port="10248"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569961 4949 flags.go:64] FLAG: --help="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569970 4949 flags.go:64] FLAG: --hostname-override=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569979 4949 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569988 4949 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569997 4949 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570006 4949 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570016 4949 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570025 4949 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570035 4949 flags.go:64] FLAG: --image-service-endpoint=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570045 4949 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570054 4949 flags.go:64] FLAG: --kube-api-burst="100"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570063 4949 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570072 4949 flags.go:64] FLAG: --kube-api-qps="50"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570081 4949 flags.go:64] FLAG: --kube-reserved=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570090 4949 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570099 4949 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570108 4949 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570117 4949 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570126 4949 flags.go:64] FLAG: --lock-file=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570136 4949 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570145 4949 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570155 4949 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570168 4949 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570177 4949 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570186 4949 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570195 4949 flags.go:64] FLAG: --logging-format="text"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570204 4949 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570214 4949 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570222 4949 flags.go:64] FLAG: --manifest-url=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570231 4949 flags.go:64] FLAG: --manifest-url-header=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570242 4949 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570251 4949 flags.go:64] FLAG: --max-open-files="1000000"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570262 4949 flags.go:64] FLAG: --max-pods="110"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570271 4949 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570280 4949 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570289 4949 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570298 4949 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570307 4949 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570316 4949 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570325 4949 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570344 4949 flags.go:64] FLAG: --node-status-max-images="50"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570354 4949 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570363 4949 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570372 4949 flags.go:64] FLAG: --pod-cidr=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570381 4949 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570393 4949 flags.go:64] FLAG: --pod-manifest-path=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570402 4949 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570412 4949 flags.go:64] FLAG: --pods-per-core="0"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570421 4949 flags.go:64] FLAG: --port="10250"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570430 4949 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570439 4949 flags.go:64] FLAG: --provider-id=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570448 4949 flags.go:64] FLAG: --qos-reserved=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570457 4949 flags.go:64] FLAG: --read-only-port="10255"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570466 4949 flags.go:64] FLAG: --register-node="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570475 4949 flags.go:64] FLAG: --register-schedulable="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570484 4949 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570499 4949 flags.go:64] FLAG: --registry-burst="10"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570508 4949 flags.go:64] FLAG: --registry-qps="5"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570547 4949 flags.go:64] FLAG: --reserved-cpus=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570557 4949 flags.go:64] FLAG: --reserved-memory=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570569 4949 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570578 4949 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570587 4949 flags.go:64] FLAG: --rotate-certificates="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570596 4949 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570605 4949 flags.go:64] FLAG: --runonce="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570614 4949 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570623 4949 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570632 4949 flags.go:64] FLAG: --seccomp-default="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570640 4949 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570650 4949 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570659 4949 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570668 4949 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570677 4949 flags.go:64] FLAG: --storage-driver-password="root"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570686 4949 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570695 4949 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570703 4949 flags.go:64] FLAG: --storage-driver-user="root"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570712 4949 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570721 4949 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570730 4949 flags.go:64] FLAG: --system-cgroups=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570739 4949 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570753 4949 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570761 4949 flags.go:64] FLAG: --tls-cert-file=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570770 4949 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570782 4949 flags.go:64] FLAG: --tls-min-version=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570792 4949 flags.go:64] FLAG: --tls-private-key-file=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570801 4949 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570810 4949 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570819 4949 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570828 4949 flags.go:64] FLAG: --v="2"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570839 4949 flags.go:64] FLAG: --version="false"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570850 4949 flags.go:64] FLAG: --vmodule=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570861 4949 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570870 4949 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571100 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571112 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571122 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571131 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571140 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571150 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571159 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571167 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571175 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571183 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571192 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571200 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571208 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571215 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571223 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571231 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571239 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571247 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571254 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571262 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571270 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571278 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571286 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571299 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571307 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571318 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571327 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571336 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571344 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571353 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571363 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571373 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571383 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571392 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571404 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571412 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571420 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571429 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571438 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571446 4949 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571454 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571463 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571473 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571483 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571492 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571500 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571509 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571540 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571548 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571556 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571566 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571574 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571583 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571591 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571599 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571609 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571617 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571624 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571633 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571641 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571648 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571656 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571663 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571671 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571679 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 
14:50:04.571686 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571697 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571704 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571712 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571720 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571728 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.572011 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.583695 4949 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.583741 4949 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583869 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583886 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583896 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 
14:50:04.583908 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583917 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583926 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583934 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583943 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583952 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583959 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583967 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583975 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583983 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583992 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584000 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584008 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584015 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584023 4949 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584031 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584042 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584055 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584065 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584075 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584085 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584096 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584105 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584113 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584122 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584130 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584139 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584148 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc 
kubenswrapper[4949]: W0120 14:50:04.584156 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584166 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584177 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584186 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584195 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584203 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584212 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584220 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584228 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584236 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584244 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584252 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584259 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584267 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: 
W0120 14:50:04.584275 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584283 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584292 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584300 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584308 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584316 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584323 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584331 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584339 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584347 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584356 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584364 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584372 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584379 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584387 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 
14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584396 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584404 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584412 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584420 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584428 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584436 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584444 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584451 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584459 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584468 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584476 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.584489 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584785 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584801 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584811 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584820 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584828 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584836 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584844 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584852 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584861 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584869 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584876 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584884 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584893 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584901 4949 feature_gate.go:330] 
unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584913 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584922 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584931 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584939 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584947 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584955 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584963 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584971 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584979 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584988 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584995 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585003 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585011 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585018 4949 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585026 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585034 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585042 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585049 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585057 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585064 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585072 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585080 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585088 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585095 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585103 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585112 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585120 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585128 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 
14:50:04.585135 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585143 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585151 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585158 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585167 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585174 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585182 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585190 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585198 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585205 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585213 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585221 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585228 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585237 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585246 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585256 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585263 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585271 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585279 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585287 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585295 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585302 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585314 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585324 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585334 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585344 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585352 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585362 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585370 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.585383 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.585639 4949 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.591481 4949 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.591656 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592588 4949 server.go:997] "Starting client certificate rotation" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592633 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592901 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 07:27:22.631201534 +0000 UTC Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.593044 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.607392 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.609638 4949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.610381 4949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.622694 4949 log.go:25] "Validated CRI v1 runtime API" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.656664 4949 log.go:25] "Validated CRI v1 image API" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.659720 4949 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.663444 4949 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-14-45-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.663909 4949 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.692842 4949 manager.go:217] Machine: {Timestamp:2026-01-20 14:50:04.69092279 +0000 UTC m=+0.500753688 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3efd1f11-fa35-4658-a27c-ab73770bda97 BootID:18da5c89-38cf-46f2-855c-9ee31684d8b7 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:be:e5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:be:e5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f3:4c:a5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ad:0a:0a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8e:ff:ed Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:54:6a:84 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:8d:f8:64 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:3c:cb:a1:94:bc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:23:a1:07:12:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.693147 
4949 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.693430 4949 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.694778 4949 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695144 4949 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695198 4949 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"mem
ory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695567 4949 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695592 4949 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695927 4949 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695992 4949 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.696290 4949 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.696864 4949 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697881 4949 kubelet.go:418] "Attempting to sync node with API server" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697913 4949 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697940 4949 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697961 4949 kubelet.go:324] "Adding apiserver pod source" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697978 4949 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.700090 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.700176 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.700217 4949 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.700352 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.700463 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.700778 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.705875 4949 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706857 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706905 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706925 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706943 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707019 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707036 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707061 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707084 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707101 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707114 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707152 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707165 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.709403 4949 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.710164 4949 server.go:1280] "Started kubelet" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.711265 4949 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.711353 4949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712131 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc systemd[1]: Started Kubernetes Kubelet. Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712749 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712780 4949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713012 4949 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713028 4949 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713043 4949 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713131 4949 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712974 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:28:53.501819622 +0000 UTC Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.714742 4949 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.714790 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.715189 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.715761 4949 factory.go:55] Registering systemd factory Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.715797 4949 factory.go:221] Registration of the systemd container factory successfully Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.715153 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c77eaf79d97de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,LastTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.716015 4949 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716255 4949 factory.go:153] Registering CRI-O factory Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716289 4949 factory.go:221] Registration of the crio container factory successfully Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716406 4949 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716447 4949 factory.go:103] Registering Raw factory Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716478 4949 manager.go:1196] Started watching for new ooms in manager Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.724028 4949 manager.go:319] Starting recovery of all containers Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.724177 4949 server.go:460] "Adding debug handlers to kubelet server" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.731952 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732009 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: 
I0120 14:50:04.732030 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732044 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732060 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732076 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732090 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732105 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732120 4949 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732135 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732150 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732163 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732176 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732192 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732213 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732228 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732245 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732260 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732287 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732299 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732314 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732328 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732341 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732354 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732368 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732384 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732400 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732413 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732425 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732438 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732452 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732466 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732509 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732546 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732560 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732573 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732586 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732598 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732611 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732623 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732636 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732649 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732660 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732673 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732685 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732698 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732711 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732724 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732739 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732753 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732767 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732784 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732798 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732814 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732832 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732845 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732859 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732871 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732883 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732896 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732909 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732921 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732935 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732949 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732964 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732977 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732989 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733002 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733015 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733027 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733040 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733053 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733066 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733080 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733093 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733108 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733122 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733136 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733150 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733164 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" 
seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733177 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733190 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733204 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733218 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733231 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733243 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733259 4949 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733289 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733301 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733322 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733336 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733349 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733363 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733379 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733393 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733405 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733417 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733429 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733441 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733454 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733466 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733479 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733497 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733512 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 
14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733554 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733569 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733584 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733599 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733613 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733628 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733644 4949 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733658 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733673 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733687 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733700 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733714 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733727 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733743 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733755 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733769 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733782 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733794 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733807 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733822 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733836 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733848 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733872 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733886 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733898 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733912 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733924 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733938 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733952 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733966 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733980 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" 
seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733996 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734012 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734025 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734038 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734052 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734301 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: 
I0120 14:50:04.734316 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734329 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734341 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734353 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734365 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734378 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734389 4949 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734405 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734418 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734432 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734445 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734456 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734467 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734480 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734490 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734503 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734559 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734577 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734588 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734599 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734609 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734623 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734636 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734648 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734659 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" 
seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734671 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734683 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734696 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734707 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734719 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734733 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 
14:50:04.734746 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734758 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734772 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734784 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734796 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734809 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734823 4949 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734837 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734849 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734868 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734880 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734893 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734906 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734918 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734929 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734976 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734993 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735008 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735025 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735039 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735052 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735065 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735260 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735286 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735303 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735315 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735328 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735394 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735410 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736315 4949 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736345 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736360 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736373 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736388 4949 reconstruct.go:97] "Volume reconstruction finished" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736398 4949 reconciler.go:26] "Reconciler: start to sync state" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.754058 4949 manager.go:324] Recovery completed Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.770713 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.777788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.777839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 
crc kubenswrapper[4949]: I0120 14:50:04.777851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780671 4949 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780712 4949 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780753 4949 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.785051 4949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787629 4949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787690 4949 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787716 4949 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.787780 4949 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.790308 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.790782 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection 
refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.793944 4949 policy_none.go:49] "None policy: Start" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.794799 4949 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.794832 4949 state_mem.go:35] "Initializing new in-memory state store" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.815864 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850621 4949 manager.go:334] "Starting Device Plugin manager" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850689 4949 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850705 4949 server.go:79] "Starting device plugin registration server" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851265 4949 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851288 4949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851542 4949 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851638 4949 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851648 4949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.862355 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 
14:50:04.888600 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.888747 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890419 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890764 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890825 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891373 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891632 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891680 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895331 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895355 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895363 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896433 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896546 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896586 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898182 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898211 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899454 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.915860 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937919 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937964 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937991 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938014 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938128 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938150 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938170 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938191 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938414 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938471 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.951706 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953211 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.953828 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: 
connection refused" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039772 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039803 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039834 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039912 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039943 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039977 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040086 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040127 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: 
I0120 14:50:05.040168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040330 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040433 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040668 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040801 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040868 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040170 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.041083 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.154832 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156935 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.157544 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial 
tcp 38.102.83.41:6443: connect: connection refused" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.226960 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.237021 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.260242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.284286 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c WatchSource:0}: Error finding container e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c: Status 404 returned error can't find the container with id e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.285430 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d WatchSource:0}: Error finding container c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d: Status 404 returned error can't find the container with id c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.287762 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.291149 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256 WatchSource:0}: Error finding container 34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256: Status 404 returned error can't find the container with id 34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256 Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.294334 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.310206 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d WatchSource:0}: Error finding container d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d: Status 404 returned error can't find the container with id d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.316787 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.316949 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590 
WatchSource:0}: Error finding container f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590: Status 404 returned error can't find the container with id f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590 Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.471234 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c77eaf79d97de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,LastTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.558386 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560855 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.561352 4949 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.712950 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.714095 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:21:00.186034454 +0000 UTC Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.741790 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.742025 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.796564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.796750 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799310 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799372 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799548 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.803033 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.803166 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805416 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805587 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806495 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590"} Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807359 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.104738 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.104843 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.118139 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.118279 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.118451 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.244419 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.244503 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:06 crc 
kubenswrapper[4949]: I0120 14:50:06.362217 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364270 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.365042 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.713436 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.714318 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:47:47.771411663 +0000 UTC
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.726610 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.728140 4949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811057 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811278 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814538 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814795 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821350 4949 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821502 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821723 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.822988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.823100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.823121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825270 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825373 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.827897 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.827965 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.828109 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.828986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.829036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.829055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.833843 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.713066 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.715200 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:12:29.683369653 +0000 UTC
Jan 20 14:50:07 crc kubenswrapper[4949]: E0120 14:50:07.718741 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s"
Jan 20 14:50:07 crc kubenswrapper[4949]: W0120 14:50:07.808437 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:07 crc kubenswrapper[4949]: E0120 14:50:07.808550 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835072 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.836711 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.836799 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838491 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530" exitCode=0
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838546 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838572 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838593 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838702 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838930 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.965732 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967250 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.368449 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.716205 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:01:39.28803442 +0000 UTC
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.797088 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846278 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f" exitCode=0
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846593 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f"}
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846835 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854235 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475"}
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854274 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854454 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.438453 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.717265 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:02:09.111589546 +0000 UTC
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863037 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863101 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863221 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863298 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863728 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.717806 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:08:09.802952828 +0000 UTC
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.873936 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b"}
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874000 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e"}
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874060 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874098 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.980779 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.091670 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.297747 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.298021 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299691 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.718569 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:26:10.009860568 +0000 UTC
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.797115 4949 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.797214 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.877777 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.877927 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.880005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.498814 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.499094 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501383 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.719224 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:05:24.977805728 +0000 UTC
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.274911 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.275192 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.288221 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.288452 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.720344 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:38:50.432247762 +0000 UTC
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.721445 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:43:56.15909642 +0000 UTC
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.728660 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.728975 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730497 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:14 crc kubenswrapper[4949]: E0120 14:50:14.862540 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.721767 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:02:10.61842003 +0000 UTC
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.755285 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.755505 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.762807 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.893727 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.900458 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.722874 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:49:51.28621141 +0000 UTC
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.895854 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:17 crc kubenswrapper[4949]: I0120 14:50:17.723604 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:28:37.916380004 +0000 UTC
Jan 20 14:50:17 crc kubenswrapper[4949]: E0120 14:50:17.968504 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.054334 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.054465 4949 trace.go:236] Trace[620736880]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.052) (total time: 10001ms):
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[620736880]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:50:18.054)
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[620736880]: [10.001882519s] [10.001882519s] END
Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.054496 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS
handshake timeout" logger="UnhandledError" Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.151134 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.151260 4949 trace.go:236] Trace[1023481260]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.149) (total time: 10001ms): Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[1023481260]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:50:18.151) Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[1023481260]: [10.001819364s] [10.001819364s] END Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.151292 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.714500 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.724723 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:56:30.053882298 +0000 UTC Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.936108 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.936211 4949 trace.go:236] Trace[952145599]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.933) (total time: 10002ms): Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[952145599]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:50:18.936) Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[952145599]: [10.002375283s] [10.002375283s] END Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.936234 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.466402 4949 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.466488 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.473321 4949 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.473384 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.725062 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:54:28.744797727 +0000 UTC Jan 20 14:50:20 crc kubenswrapper[4949]: I0120 14:50:20.725307 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:29:48.549594684 +0000 UTC Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.169614 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171278 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:21 
crc kubenswrapper[4949]: E0120 14:50:21.177126 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.671665 4949 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.726394 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:59:12.740320599 +0000 UTC Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.797899 4949 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.798295 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 14:50:22 crc kubenswrapper[4949]: I0120 14:50:22.728143 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:17:12.315102702 +0000 UTC Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.112058 4949 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.296485 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.303619 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.548233 4949 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.712210 4949 apiserver.go:52] "Watching apiserver" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.716179 4949 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.716657 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717236 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.717374 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717452 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717467 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.717553 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717890 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.718001 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.718115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.718216 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.719658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720625 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720787 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720816 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720803 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721259 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721339 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721856 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.723121 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.728361 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-08 06:00:35.616469918 +0000 UTC Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.755931 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.772260 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.791575 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.810570 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.814236 4949 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.825571 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec2708504
49c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.842490 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.859621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.876607 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.914233 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.923399 4949 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.453471 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.456951 4949 trace.go:236] Trace[863191421]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:13.779) (total time: 10677ms): Jan 20 14:50:24 crc kubenswrapper[4949]: Trace[863191421]: ---"Objects listed" error: 10677ms (14:50:24.456) Jan 20 14:50:24 crc 
kubenswrapper[4949]: Trace[863191421]: [10.677092457s] [10.677092457s] END Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.457016 4949 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.459354 4949 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.476403 4949 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.492074 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.509128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.526369 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.540954 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.557712 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560222 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560260 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560289 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560313 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 
14:50:24.560366 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560397 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560445 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560540 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560566 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560613 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560645 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560635 4949 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560670 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560830 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560868 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561015 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561049 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 
crc kubenswrapper[4949]: I0120 14:50:24.561088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561117 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561154 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561188 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561302 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561368 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561406 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561442 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561475 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561511 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561581 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561615 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561684 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561697 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561756 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561790 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561923 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561928 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561939 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561959 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562063 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562128 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562218 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562303 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562341 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562432 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562471 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562506 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562568 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562607 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562643 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562678 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562794 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562829 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562936 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562974 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563013 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563108 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563150 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563286 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563325 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563402 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563443 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563481 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563547 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563622 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563660 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563739 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563779 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563819 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563895 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563933 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563976 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564133 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564200 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564281 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564320 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564361 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564396 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564461 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " 
Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564770 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564913 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564965 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565512 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565620 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565681 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565773 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565813 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565885 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565937 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566044 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566100 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566146 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566198 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566314 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566580 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566645 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566698 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566751 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566891 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566931 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566970 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567008 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567059 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567098 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567133 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567254 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567293 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567331 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 
14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567559 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567713 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567792 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567831 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 
14:50:24.567869 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567908 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567947 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568000 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568055 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568095 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568174 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568251 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568329 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568370 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568409 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568555 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568633 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568671 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568748 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568786 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568827 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568867 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568946 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568982 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569022 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569139 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569177 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569338 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569426 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569478 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569600 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569854 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569997 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570026 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570053 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570081 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570104 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570126 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580120 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562321 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582494 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562590 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562690 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562934 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562958 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563107 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563387 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563504 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564264 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564337 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564647 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564619 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564714 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564890 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564976 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565765 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565911 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566067 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566188 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566231 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566251 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566715 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566873 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566954 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568567 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568581 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568640 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568997 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569362 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569745 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570056 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570111 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570169 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570198 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570867 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571039 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571864 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571912 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571976 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572042 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572476 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.572856 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.072831568 +0000 UTC m=+20.882662426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583652 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583680 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583783 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584082 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572887 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572884 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573145 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573302 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573671 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573842 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574172 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574457 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574798 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575060 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575971 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576149 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576497 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576914 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576937 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577060 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577105 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577784 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577865 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579234 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579315 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579627 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580759 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580841 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581144 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581223 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581670 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581575 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582240 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585004 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585782 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585867 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586046 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586310 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586430 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587063 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587371 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587888 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587891 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.588777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.591482 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.591582 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.091562082 +0000 UTC m=+20.901392950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.591673 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592037 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592079 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592375 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592410 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592773 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592819 4949 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.592920 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.593025 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.093004739 +0000 UTC m=+20.902835607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593056 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593640 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593923 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.594044 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.598674 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.600833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.601483 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod 
"925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.605623 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.605657 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605735 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605817 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.606267 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.606756 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.607885 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.612035 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.612769 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613080 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613989 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.616807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.620591 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.620694 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.120673951 +0000 UTC m=+20.930504809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.623713 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.624409 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.637959 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.647960 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.647993 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.648018 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.648065 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.148050083 +0000 UTC m=+20.957880941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.648634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.650191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.650656 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.638308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver
-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659065 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659878 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659940 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660219 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660280 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660971 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.663001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.665835 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.665854 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666108 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666327 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666410 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666751 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666843 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667270 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667890 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670892 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671171 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671224 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671242 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671256 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671265 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671274 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671284 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671297 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671308 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671317 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671327 4949 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671336 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671345 4949 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671355 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" 
DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671364 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671374 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671384 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671394 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671428 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671440 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671448 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.671458 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671486 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671499 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671554 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671565 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671576 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671618 4949 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671628 4949 reconciler_common.go:293] "Volume detached for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671637 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671646 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671655 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671664 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671672 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671682 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671690 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671700 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671756 4949 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671768 4949 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671776 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671786 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671796 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671808 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" 
DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671818 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671828 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671839 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671849 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671879 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671890 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671901 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.671722 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671910 4949 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671938 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671952 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671960 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671969 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671977 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671987 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671997 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.672006 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672016 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672027 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672035 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672044 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672052 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672061 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672070 4949 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672079 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672089 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672098 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673251 4949 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673315 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673332 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673344 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673363 4949 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673374 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673384 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673395 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673412 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673423 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673433 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673442 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673456 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673465 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673475 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673486 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673501 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673510 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.673550 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673566 4949 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673578 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673587 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673596 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673610 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673620 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673630 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673639 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673652 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673662 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673671 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673684 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673693 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673701 4949 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" 
DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673710 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673722 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673740 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673750 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673760 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673772 4949 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673782 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673791 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673800 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673820 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673832 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673841 4949 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673855 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673865 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673876 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.673887 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673902 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673915 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673927 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673938 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673953 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673963 4949 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673973 4949 reconciler_common.go:293] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674041 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674056 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674070 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674080 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674094 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674103 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674113 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674123 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674135 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674145 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674156 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674165 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674180 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674194 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674206 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674224 4949 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674237 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674246 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674256 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674270 4949 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674280 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on 
node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674289 4949 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674298 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674313 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674323 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674334 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674347 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674362 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674373 4949 
reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674383 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674417 4949 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674428 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674438 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674449 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674463 4949 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674472 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674482 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674492 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674528 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674538 4949 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674549 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674564 4949 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674574 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674584 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674593 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674605 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674614 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674625 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674634 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674648 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674658 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674669 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674678 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674693 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674703 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.675933 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.675995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.676061 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.692910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.697088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.729926 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:56:11.822979352 +0000 UTC Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.753735 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.770062 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.770041 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775161 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775200 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775211 4949 reconciler_common.go:293] "Volume detached 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775224 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775236 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775247 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775257 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775266 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775276 4949 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.776186 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.786244 4949 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.791820 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.792392 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.793829 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.794452 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.795463 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.795973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.796571 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.797531 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.798138 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.798801 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.799178 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.799675 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.800952 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.801544 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.803712 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.804658 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.805356 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.806456 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.807045 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.807871 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.808636 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.809265 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.810085 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.811674 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.812791 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.813385 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.814208 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.814941 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.815323 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.815987 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.816956 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.817698 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.818336 4949 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.818479 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.821642 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.822707 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.824098 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.825954 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.826619 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.827597 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.828214 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.829299 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.829835 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.830652 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.830893 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.832145 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.833496 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.834294 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.835492 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.836321 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.838336 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.839066 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.840718 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.841594 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.842494 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.844166 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.844995 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.845967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.858046 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.874699 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.892422 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.909479 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.918508 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76"} Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.918600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b1ab9b79eb264bf0014fb2391f6f422e162c002c2543736ea4894a4d1c67500a"} Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.929470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.940265 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.947554 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: W0120 14:50:24.955300 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1 WatchSource:0}: Error finding container ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1: Status 404 returned error can't find the container with id ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.958193 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.967604 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.981774 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.002284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.023305 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.035399 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.051998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.061427 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.075602 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.078646 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.078825 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.078799003 +0000 UTC m=+21.888629861 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.086114 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.100868 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.113115 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179662 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179686 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179710 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179709 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179777 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.179764228 +0000 UTC m=+21.989595086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179823 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179841 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179853 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179887 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-20 14:50:26.179876182 +0000 UTC m=+21.989707050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179943 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179954 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179963 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179990 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.179981835 +0000 UTC m=+21.989812703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.180039 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.180065 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.180057258 +0000 UTC m=+21.989888136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.730860 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:29:15.455770432 +0000 UTC Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788394 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788544 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788592 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788495 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788684 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.805862 4949 csr.go:261] certificate signing request csr-2kwn5 is approved, waiting to be issued Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.843869 4949 csr.go:257] certificate signing request csr-2kwn5 is issued Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.922131 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.923533 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b2375cb4dbc005ebf9f503b624410324d7ddca522dfaeee89b2862d39aa1ac60"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.925084 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.925112 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.955574 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.983848 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.003659 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.026222 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.047483 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.063028 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.077535 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.087039 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.087198 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.087175297 +0000 UTC m=+23.897006155 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.095625 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.116814 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.132218 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.145271 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.156202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.176530 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188331 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188387 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:26 crc 
kubenswrapper[4949]: I0120 14:50:26.188412 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188446 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188559 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188593 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188592 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188627 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188641 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188651 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188678 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.18865879 +0000 UTC m=+23.998489648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188604 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188696 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188688061 +0000 UTC m=+23.998518919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188573 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188713 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188705981 +0000 UTC m=+23.998536839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188733 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188718862 +0000 UTC m=+23.998549720 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.192070 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.272291 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.286934 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.706720 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gnfmv"] Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.707027 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.707226 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-sqr5x"] Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.708272 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709193 4949 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709214 4949 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709231 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709283 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709537 4949 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User 
"system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709602 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.709805 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kgqjd"] Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709812 4949 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709969 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709833 4949 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-multus": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709992 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710023 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710164 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710760 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.711438 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.712851 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713372 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713470 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713641 4949 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.718137 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.725710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.731222 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:03:12.755829601 +0000 UTC Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.736590 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.754371 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.772359 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.785643 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792491 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792535 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792627 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792713 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792796 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc62j\" (UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: 
\"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792831 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.793004 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.793035 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.800989 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.819996 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.833360 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844703 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 14:45:25 +0000 UTC, rotation deadline is 2026-11-20 23:23:23.998892803 +0000 UTC Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844760 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7304h32m57.154139948s for next certificate rotation Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844755 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.857565 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.871643 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.883680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894242 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894268 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc62j\" (UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: 
\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894375 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod 
\"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894501 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: 
\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894912 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895279 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895093 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") 
" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895472 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.904453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.908445 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.912622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.921609 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T1
4:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.928950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc62j\" 
(UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.932659 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.954171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.965794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.979393 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.000862 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.014259 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.032235 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:27 crc kubenswrapper[4949]: W0120 14:50:27.043835 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9c7916_1f51_47f7_abe3_2ec9cd2a1f5e.slice/crio-28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1 WatchSource:0}: Error finding container 28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1: Status 404 returned error can't find the container with id 28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1 Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074003 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2szcd"] Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074424 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074659 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.075495 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.076159 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.076394 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077199 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077324 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077450 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077535 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077673 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.078899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.079206 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.090883 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.104466 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.117419 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.132133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.144944 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.157762 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.172169 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.182319 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.195681 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198165 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198191 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198332 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198354 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198404 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198424 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198454 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198563 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198602 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198741 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198765 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198807 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198882 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198916 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198935 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198949 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198963 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199000 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199040 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199062 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199136 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199166 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.219533 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.235099 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.249791 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.261400 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.274059 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300193 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300260 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300284 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300307 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300459 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300385 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300411 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300462 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300543 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300574 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300677 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300704 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300749 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300820 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300853 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300873 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300895 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300940 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300997 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301121 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301189 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301239 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301261 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300797 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301333 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301371 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301391 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301434 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301545 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301857 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301863 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301894 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302574 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302977 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.305444 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.329577 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.339274 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.367073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.370810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.413481 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.416727 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:50:27 crc kubenswrapper[4949]: W0120 14:50:27.427262 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod775d7cfb_d5e3_457d_a7fa_4f0bdb752d04.slice/crio-5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7 WatchSource:0}: Error finding container 5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7: Status 404 returned error can't find the container with id 5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.430342 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.469345 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.483916 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.497972 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.529361 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.545638 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.559838 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.571509 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.577947 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579569 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.586283 4949 
kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.586498 4949 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587481 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.601674 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.616124 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619126 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619152 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.630761 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634136 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.650825 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.666977 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.667161 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.668993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669110 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.731652 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:38:20.682302982 +0000 UTC Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.760466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771934 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.788939 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.789601 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789695 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.789764 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789834 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.815258 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.853240 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.855598 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.862666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874741 4949 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.895420 4949 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.895565 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist podName:da08b8e6-19e1-41fa-8e71-2988f3effb27 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.395538767 +0000 UTC m=+24.205369625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-sqr5x" (UID: "da08b8e6-19e1-41fa-8e71-2988f3effb27") : failed to sync configmap cache: timed out waiting for the condition Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.930396 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931742 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" exitCode=0 Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933499 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.965203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977152 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.978133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.993282 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.994077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.013425 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.017262 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.027882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.038826 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.050126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.062680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.071387 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079533 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.081010 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.092921 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.105835 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.109296 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.109489 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.109473516 +0000 UTC m=+27.919304374 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.129491 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d9
5b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.138824 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.145414 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.150643 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.157753 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.167389 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.181215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182825 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.193277 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.206082 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210659 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210752 4949 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210830 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.210813533 +0000 UTC m=+28.020644391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210875 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210929 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.210913306 +0000 UTC m=+28.020744254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211002 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211018 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211029 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211054 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.21104616 +0000 UTC m=+28.020877128 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211123 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211136 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211146 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211177 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.211170254 +0000 UTC m=+28.021001112 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.217304 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.218758 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.229741 4949 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: W0120 14:50:28.233168 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0e8f07d_a71c_4c64_96f3_eecb529c1674.slice/crio-9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592 WatchSource:0}: Error finding container 9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592: Status 404 returned error can't find the container with id 9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592 Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 
14:50:28.244776 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.277899 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.285687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286069 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.289228 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.307705 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.347850 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.389993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390049 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390080 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.412688 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.413379 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494404 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.526871 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: W0120 14:50:28.548423 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda08b8e6_19e1_41fa_8e71_2988f3effb27.slice/crio-23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b WatchSource:0}: Error finding container 23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b: Status 404 returned error can't find the container with id 23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597759 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.701931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702350 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.732015 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:21:23.05294025 +0000 UTC Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.803091 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.807023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.807678 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.810989 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.816778 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.835133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.855938 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.875558 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.898460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0
b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917979 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917991 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.922352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.941618 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947464 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947508 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947582 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" 
event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.950010 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnfmv" event={"ID":"c0e8f07d-a71c-4c64-96f3-eecb529c1674","Type":"ContainerStarted","Data":"cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.950038 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnfmv" event={"ID":"c0e8f07d-a71c-4c64-96f3-eecb529c1674","Type":"ContainerStarted","Data":"9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.951902 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.951966 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"d4da37859dee95109f10ecb8a58f89743652a53cf9c32e2927206f0f473a79bd"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.953885 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.953924 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" 
event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.957102 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.974226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.988419 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hzkk7"] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.988838 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.989731 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\
"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990243 
4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990838 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.991464 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.001149 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.013467 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc 
kubenswrapper[4949]: I0120 14:50:29.020343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.026967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.048243 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.060700 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.077470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.091968 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.110717 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121745 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124029 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a060
1797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124258 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.140464 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.180128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.218176 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222615 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " 
pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.223646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226923 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.266109 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.278777 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.319499 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329894 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329926 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.346345 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.360635 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.404413 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432493 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432570 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.440785 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.479204 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.535024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc 
kubenswrapper[4949]: I0120 14:50:29.536138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536151 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641322 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.732639 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:55:20.077854693 +0000 UTC Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744415 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788656 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.788795 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.789119 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.789433 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949891 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949941 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.957705 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4" exitCode=0 Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.957782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.959714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hzkk7" event={"ID":"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf","Type":"ContainerStarted","Data":"5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.959786 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hzkk7" event={"ID":"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf","Type":"ContainerStarted","Data":"4d0f16022c36144668b61a893493fa80b463928f98cac81ff81851bf5710231f"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.974157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.003938 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.023143 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.035470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.046185 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052873 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.058959 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.073993 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.087352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.101354 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.120676 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.141718 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 
14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.171480 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.195982 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.214310 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.236071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.257838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258615 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.259710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.273608 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.284725 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.299166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.312646 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.325292 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.357136 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360936 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.396947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.439821 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc 
kubenswrapper[4949]: I0120 14:50:30.463878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463912 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.477166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.518358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.558941 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567300 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.595977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.644730 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669844 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.679788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.732969 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:14:18.775738944 +0000 UTC Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 
14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.963863 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50" exitCode=0 Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.963914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977170 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977207 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.993071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.005292 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.017736 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.028974 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.048373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.061579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.075977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080220 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.088121 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.099619 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.115563 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.128579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.158088 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.182969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183054 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.198098 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.234764 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.278068 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286133 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389388 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492375 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600668 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704630 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704657 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.733854 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:11:43.189921566 +0000 UTC
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.788505 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.788671 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.788802 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.789188 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.789244 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.789321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806783 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912624 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.971039 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.972941 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed" exitCode=0
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.973474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed"}
Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.990484 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.001541 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.012911 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015314 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.031358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.046755 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.061083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.071323 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.083066 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.096120 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.112671 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117746 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 
14:50:32.117783 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.130247 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.147095 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.152989 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.153156 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.153143682 +0000 UTC m=+35.962974540 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.157921 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.175794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.194142 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221142 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221154 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.253980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254044 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254181 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254198 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254213 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254228 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254238 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254244 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254204 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254300 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254295 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254242102 +0000 UTC m=+36.064072970 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254333 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254324564 +0000 UTC m=+36.064155422 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254345 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254340075 +0000 UTC m=+36.064170933 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254365 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254358855 +0000 UTC m=+36.064189713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.324016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.324029 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.631968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632057 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.734699 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:07:02.596406635 +0000 UTC Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735385 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941277 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.980914 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b" exitCode=0 Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.980964 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.996947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.017249 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.031395 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043465 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043477 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.051485 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.077543 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.097126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.120323 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151328 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.180037 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.221582 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.241112 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.254170 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.277308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.292167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.310110 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.321746 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458828 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561333 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.664014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.664028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.735170 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:48:53.961380966 +0000 UTC Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766962 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788549 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788574 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788632 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.788760 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.788910 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.789126 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973217 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.988089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.998791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999655 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.020596 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.029394 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.035659 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.042689 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.064986 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075840 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.080272 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.099810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.117789 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.135296 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.153885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.174593 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.179003 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.179023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.191800 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.209208 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.242903 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.259634 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282311 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282467 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282949 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7
fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.301922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.326236 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14
:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.341109 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.358143 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.371364 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385197 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.391413 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.406254 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.423568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.437788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.454380 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.471032 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488801 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.510203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.525220 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.542196 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.564166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591879 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591968 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.593695 4949 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.735764 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:43:42.563539574 +0000 UTC Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797407 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.808091 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z 
is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.828368 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b
4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.850160 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.882607 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.899666 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900500 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900511 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.909971 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.925435 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.952689 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.965906 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.979041 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc6
2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.990290 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002719 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.005224 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708" exitCode=0 Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.005325 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.011978 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.030047 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.045803 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.063345 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.081909 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.095716 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.105010 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.115699 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.130459 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.152083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.167778 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.181891 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.196811 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207918 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.211449 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.229976 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.241239 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.252255 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.269358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.283013 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.294303 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.310952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 
14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311540 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.413785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.413954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414172 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.518075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.518238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.723738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724730 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.736316 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:20:57.569609283 +0000 UTC Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.788809 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.788922 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.789175 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.789300 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.789327 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.789399 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827923 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930763 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.011961 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86" exitCode=0 Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.012069 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.026301 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035750 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035774 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035826 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.048556 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.064830 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.080128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.093638 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.112273 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.130823 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138484 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.163373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.181884 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.205104 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.219315 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.245305 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.265306 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.279678 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.291425 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345887 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448572 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551586 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654917 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.737254 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:10:00.030606219 +0000 UTC Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861394 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.963944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964181 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.023024 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.027852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.032396 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" exitCode=1 Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.032444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.033213 4949 scope.go:117] "RemoveContainer" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.051929 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.067361 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.081416 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.098772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.113341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.128507 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.147629 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.167023 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169765 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169776 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.179852 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.201795 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.224245 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.254099 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.270599 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272621 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272653 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.287281 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec
8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.298028 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.316414 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"message\\\":\\\"r removal\\\\nI0120 14:50:36.574868 6200 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 14:50:36.574890 6200 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 14:50:36.574899 
6200 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 14:50:36.574933 6200 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 14:50:36.574946 6200 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 14:50:36.574949 6200 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 14:50:36.574987 6200 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:36.574998 6200 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:50:36.575003 6200 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 14:50:36.575005 6200 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 14:50:36.575013 6200 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 14:50:36.575024 6200 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 14:50:36.575031 6200 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:36.575101 6200 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 14:50:36.575153 6200 factory.go:656] Stopping watch factory\\\\nI0120 14:50:36.575182 6200 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.331164 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.342281 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.353407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.365577 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 
14:50:37.374878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374911 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.392089 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.409570 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.428209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.447531 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.460432 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.474237 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477189 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.506695 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.519397 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.537229 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.548355 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683181 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.737873 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:45:54.35548092 +0000 UTC Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785782 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788903 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.788964 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788865 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.789077 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.789141 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888750 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037348 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.038312 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.039438 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.043856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.045174 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.045421 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.051403 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060601 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060776 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.072497 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.075855 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed 
to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.084972 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z 
is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.094778 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.097315 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433
b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc 
kubenswrapper[4949]: I0120 14:50:38.099501 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099554 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.113964 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.116645 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.128917 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.137268 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.137392 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138978 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.145077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.167808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.186284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.211325 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.224766 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241356 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.252660 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"message\\\":\\\"r removal\\\\nI0120 14:50:36.574868 6200 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 14:50:36.574890 6200 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 14:50:36.574899 6200 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 14:50:36.574933 6200 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0120 14:50:36.574946 6200 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 14:50:36.574949 6200 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 14:50:36.574987 6200 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:36.574998 6200 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:50:36.575003 6200 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 14:50:36.575005 6200 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 14:50:36.575013 6200 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 14:50:36.575024 6200 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 14:50:36.575031 6200 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:36.575101 6200 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 14:50:36.575153 6200 factory.go:656] Stopping watch factory\\\\nI0120 14:50:36.575182 6200 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/
networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.267378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T
14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.288476 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.303250 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448130 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550149 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653404 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.738465 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:36:08.989457762 +0000 UTC Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757632 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859744 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859805 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962748 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962907 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.050789 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.051839 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057087 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" exitCode=1 Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057145 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057212 4949 scope.go:117] "RemoveContainer" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.058375 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.058816 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.065961 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066097 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.079977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.099007 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.113894 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.121223 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb"] Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.121948 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.124850 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.125024 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.133401 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.150334 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.166967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168939 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.182701 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.199922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.211540 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230560 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230609 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs88\" (UniqueName: \"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.241700 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.257368 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.271942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272053 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272117 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.282191 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.299549 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.316636 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332329 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs88\" (UniqueName: \"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332389 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333158 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333287 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-
operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7e
a0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.340770 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.349697 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.353936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs88\" (UniqueName: \"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.364873 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc 
kubenswrapper[4949]: I0120 14:50:39.377143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.382321 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.398247 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.419145 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.434012 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.441446 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.463588 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\
\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.480494 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.481610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.500235 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.522928 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: W0120 14:50:39.546750 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9957b569_5b87_4d8d_bec2_4a5d4a8b891c.slice/crio-1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e WatchSource:0}: Error finding container 1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e: Status 404 returned error can't find the container with id 1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.554129 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069
cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.575053 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 
14:50:39.585604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.590403 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.604994 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.623786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.641782 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687933 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.739503 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:51:27.138296898 +0000 UTC Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788022 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788111 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788111 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788271 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788354 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788429 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789976 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.790004 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893354 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996846 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.064718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.068127 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099673 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203415 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.241936 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.242273 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:50:56.242243732 +0000 UTC m=+52.052074630 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343179 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343217 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343256 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343337 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343348 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343388 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343400 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343488 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343375 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343555 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343565 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343417 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343397694 +0000 UTC m=+52.153228552 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343594 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.34358436 +0000 UTC m=+52.153415218 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343614 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343605641 +0000 UTC m=+52.153436499 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343627 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343620871 +0000 UTC m=+52.153451729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409857 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409885 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512331 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512347 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.611235 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"] Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.611763 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.611831 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615276 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.632845 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.651403 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.668044 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.685783 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.697757 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc 
kubenswrapper[4949]: I0120 14:50:40.710613 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717930 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.726477 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.739793 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:02:50.312867379 +0000 UTC Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.744202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.747626 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.747743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.759564 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.770908 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.784966 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.809014 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820677 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.824770 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.841584 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.848221 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.848313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.848437 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.848534 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:41.34849543 +0000 UTC m=+37.158326298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.855146 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.866900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.868840 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.895981 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.923830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924274 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.027843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028847 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.077698 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132096 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236761 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339867 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.353387 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.353491 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.353570 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:42.353554476 +0000 UTC m=+38.163385334 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443332 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546785 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649900 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.741238 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:39:40.129714503 +0000 UTC Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752557 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.788936 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.788997 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.789085 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789085 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789197 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.958979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959078 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061968 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.094216 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.115620 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.135687 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.155277 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166712 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166732 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.172165 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.190915 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.207399 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.219417 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc 
kubenswrapper[4949]: I0120 14:50:42.233739 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.249842 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.268508 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.284019 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.297729 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aa
e82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.318135 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.336594 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.350885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.361425 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.363828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.364129 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.364299 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:44.364268996 +0000 UTC m=+40.174099854 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371436 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.372709 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577258 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.742209 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:34:29.132851326 +0000 UTC Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786792 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.788354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.788637 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890327 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096713 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.202315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203544 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306832 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410257 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.513936 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514140 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616646 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.742894 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:11:07.789330887 +0000 UTC Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788631 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788721 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788753 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788634 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788883 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822340 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029198 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131724 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131743 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235420 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338709 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.387616 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.387808 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.387892 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:48.387867255 +0000 UTC m=+44.197698153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544512 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647865 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.743582 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:30:23.431328401 +0000 UTC Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750963 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.788276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.788419 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.808606 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.828035 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.848121 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857962 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.858059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.858102 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.871611 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.885736 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.898407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.917133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.929998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc 
kubenswrapper[4949]: I0120 14:50:44.954173 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961512 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961574 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.969636 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.992097 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.003977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.018954 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.047702 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064532 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.067591 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
6d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.082167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.094252 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167432 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374174 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477405 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.580974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581118 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683382 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.743809 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:14:52.727963133 +0000 UTC Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788909 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788918 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789007 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789109 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789217 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889815 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889863 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992752 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095179 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198683 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302571 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404738 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.610985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611156 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713420 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.744926 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:16:32.520740933 +0000 UTC Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.788475 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:46 crc kubenswrapper[4949]: E0120 14:50:46.788724 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816631 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023462 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.127010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.127024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.228957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229021 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229050 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332199 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538733 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641865 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.745507 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:53:42.893962413 +0000 UTC Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788864 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788954 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788965 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789257 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789378 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.848013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.848036 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951384 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054256 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157454 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222206 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.243335 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249506 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.268642 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276584 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276662 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.298417 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304542 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304588 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.322794 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.345937 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.346054 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348842 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.430386 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.430706 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.430833 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.430800319 +0000 UTC m=+52.240631207 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.452000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.452020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554667 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656711 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.746254 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:37:58.469973743 +0000 UTC Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759813 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.788355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.788639 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862748 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.965881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173402 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276676 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380116 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483232 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586917 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.746727 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:04:53.222501832 +0000 UTC Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788340 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788488 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.788486 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.788712 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.789464 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.790041 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792382 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896547 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999240 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101712 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.128052 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.131907 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.132305 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.153581 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.178344 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.196728 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204464 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.213068 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.237366 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.268137 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.286077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.302886 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308190 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.316543 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.328943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.338871 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc 
kubenswrapper[4949]: I0120 14:50:50.350813 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.364585 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.378587 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.389317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.399310 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410997 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.411014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.411027 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.423126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac
4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513313 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513341 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615348 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717879 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717924 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717952 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.747452 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:36:22.689668057 +0000 UTC Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.788431 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:50 crc kubenswrapper[4949]: E0120 14:50:50.788633 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820622 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.922929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.922988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923043 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026693 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129346 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.137401 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.138687 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142596 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" exitCode=1 Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142698 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.143495 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.143927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.191113 4949 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8
fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0c
a1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.214222 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233242 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233257 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233314 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.240270 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec
8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.259226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.275234 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.292897 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d576
36130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.312585 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.326729 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336249 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.339975 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.351308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.364383 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.377819 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc 
kubenswrapper[4949]: I0120 14:50:51.394065 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.405135 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.417718 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.428673 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.438508 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439712 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543555 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.648039 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.748682 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:09:02.616551918 +0000 UTC Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750729 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.788697 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788760 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.788972 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.789225 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853405 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956693 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060448 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.151031 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.156811 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:50:52 crc kubenswrapper[4949]: E0120 14:50:52.157126 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162770 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162900 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.179981 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95
dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.199102 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.214378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.229480 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.242866 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc 
kubenswrapper[4949]: I0120 14:50:52.260091 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265866 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265940 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.277136 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.294499 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.305992 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.320932 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.343900 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369699 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.372704 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.389856 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.410014 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.424419 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.440579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.456171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc 
kubenswrapper[4949]: I0120 14:50:52.472870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472885 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.503789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.515160 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.530711 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] 
[] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.551020 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.565582 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.581754 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.593049 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.605238 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.623910 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.637588 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc 
kubenswrapper[4949]: I0120 14:50:52.652257 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.669547 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc 
kubenswrapper[4949]: I0120 14:50:52.678368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678396 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.688594 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.710341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.725396 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.748969 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:33:27.960967303 +0000 UTC Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.749167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.766378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.780950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781051 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781092 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.788484 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:52 crc kubenswrapper[4949]: E0120 14:50:52.788748 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.790720 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.807087 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884323 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987309 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091120 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296366 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296406 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398664 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605446 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709381 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.749726 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:14:34.190966962 +0000 UTC Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.791871 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.791947 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792099 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.792147 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792282 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792476 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812932 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.916986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917022 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917052 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.019605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.019856 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.122988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123119 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.225719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.225996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226192 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329233 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431766 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.535229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.535987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536045 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639443 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742891 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.750238 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:40:16.057542066 +0000 UTC Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.788554 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:54 crc kubenswrapper[4949]: E0120 14:50:54.788810 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.803621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.821922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.842042 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc 
kubenswrapper[4949]: I0120 14:50:54.848915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.857138 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.877788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.893885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.909202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.928071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.946341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951599 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.979215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac
4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.996882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.014950 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.030669 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.055072 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.055846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056228 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056487 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.086678 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.106659 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.128333 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.145661 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.160859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161569 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.264978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265128 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368462 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471824 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.576023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.576252 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.678832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.751196 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:11:21.651614726 +0000 UTC Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782959 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788464 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788492 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788544 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788617 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788704 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788813 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885936 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989225 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091858 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297364 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.316733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.316927 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:51:28.3168855 +0000 UTC m=+84.126716388 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400474 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418673 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418698 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418716 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418775 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.418754775 +0000 UTC m=+84.228585663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418844 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418885 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-20 14:51:28.418872539 +0000 UTC m=+84.228703427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418919 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419039 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.419014673 +0000 UTC m=+84.228845601 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419064 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419100 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419119 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419209 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.419179588 +0000 UTC m=+84.229010476 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503826 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.518966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.519200 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.519316 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:51:12.519282726 +0000 UTC m=+68.329113644 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607581 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.751613 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:02:04.33305525 +0000 UTC Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.789027 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.789206 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814866 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918782 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021666 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021770 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021866 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124305 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227594 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330306 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.433931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.435015 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.537987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538080 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640781 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744493 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.752063 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:26:40.996577561 +0000 UTC Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.788479 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.789071 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.789385 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789617 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789774 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950786 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.055163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.055374 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.158940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159008 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.262899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.264162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367929 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470708 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574130 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574230 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576140 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.596724 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.627204 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.632946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633078 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660192 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681990 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.701948 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.702240 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.753713 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:10:01.398985922 +0000 UTC Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.788288 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.788484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806806 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910176 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910187 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013129 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117645 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220242 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323769 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427339 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531643 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738691 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738830 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.754912 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:58:36.454111325 +0000 UTC Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788402 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788508 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.788706 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.788863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.789009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842450 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944745 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048131 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048315 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151483 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254208 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361256 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464302 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567774 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.670300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.670397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.670419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.670442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.670461 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.755116 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:18:41.128786152 +0000 UTC Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.773811 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.773862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.773874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.773891 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.773903 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.788468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:00 crc kubenswrapper[4949]: E0120 14:51:00.788608 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.877018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.877063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.877080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.877104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.877121 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.980646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.980740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.980766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.980797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.980820 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.084015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.084037 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.186703 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.186776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.186802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.186831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.186853 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.290166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.290256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.290275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.290298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.290317 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.393260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.393327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.393340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.393358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.393371 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.496727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.496795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.496813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.496843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.496861 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.599773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.599804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.599812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.599824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.599834 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.702018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.702079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.702100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.702124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.702142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.756176 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:23:46.188106121 +0000 UTC Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.788955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.788973 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.789094 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789295 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789452 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789632 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.804992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.805058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.805082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.805111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.805134 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.908655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.908725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.908744 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.908771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.908789 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011841 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.115477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.115594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.115615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.115683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.115705 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.219167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.219266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.219284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.219318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.219338 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.322093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.322166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.322193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.322219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.322238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.425055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.425112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.425128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.425151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.425167 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.527071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.527113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.527123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.527136 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.527144 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.630725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.630805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.630827 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.630855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.630911 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.733721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.733768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.733780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.733795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.733807 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.756388 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:19:16.123502815 +0000 UTC Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.788244 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:02 crc kubenswrapper[4949]: E0120 14:51:02.788416 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.837167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.837231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.837256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.837285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.837308 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.940329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.940417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.940453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.940482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.940504 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043999 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.147819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.147883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.147898 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.147921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.147938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.251428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.251545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.251566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.251591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.251608 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.353753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.353817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.353828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.353851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.353870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.456564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.456607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.456619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.456635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.456648 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.559612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.559688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.559706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.559729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.559740 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.662166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.662255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.662282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.662316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.662342 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.757463 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:11:15.346498217 +0000 UTC Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.766049 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.766105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.766123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.766147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.766164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787956 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787924 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788115 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788237 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788346 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.868829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.868908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.868931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.868960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.868983 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.972421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.972531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.972546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.972575 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.972591 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075898 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.178848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.178922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.178948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.178978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.179001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.281966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.282026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.282045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.282068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.282087 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.386109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.386188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.386202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.386226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.386238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.488885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.488963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.488980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.489010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.489028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.592121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.592181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.592198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.592222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.592239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.695010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.695073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.695091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.695117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.695135 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.758085 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:19:09.406606407 +0000 UTC Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.787996 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:04 crc kubenswrapper[4949]: E0120 14:51:04.790252 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800844 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.811329 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76
b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.852142 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.877199 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.896990 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.915731 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.934776 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.955316 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.970710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc 
kubenswrapper[4949]: I0120 14:51:04.989798 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.004926 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc 
kubenswrapper[4949]: I0120 14:51:05.008100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008149 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.027138 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.045258 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.078192 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110453 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.119625 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.140715 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.163860 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.178548 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.191964 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212820 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419710 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.523015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.523029 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728542 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728614 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728676 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.758419 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:03:16.223046793 +0000 UTC Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.788898 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.788937 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789123 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.789210 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789439 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789542 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.937992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041139 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144377 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350641 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453591 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659201 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.758930 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:45:13.835462301 +0000 UTC Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762422 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.788345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:06 crc kubenswrapper[4949]: E0120 14:51:06.788571 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.790019 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:06 crc kubenswrapper[4949]: E0120 14:51:06.790465 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.871020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.973838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.973980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974044 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077546 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283465 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387388 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387447 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.593980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594160 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698867 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.759452 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:51:26.6739738 +0000 UTC Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.788985 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.789577 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789684 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.789901 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789986 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802739 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.906948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907032 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009604 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.112929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113087 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.421854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.421995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422087 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.525002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.525013 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627454 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730955 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748403 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.760449 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:09:06.436177582 +0000 UTC Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.762448 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",
\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766735 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.780102 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783722 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.790616 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.790735 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.796715 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801796 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.817478 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826202 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.842811 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.843088 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947628 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947784 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051131 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051159 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154320 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359962 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462417 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564474 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666989 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.760859 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:47:39.321821647 +0000 UTC Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768987 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788289 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:09 crc kubenswrapper[4949]: E0120 14:51:09.788470 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788298 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:09 crc kubenswrapper[4949]: E0120 14:51:09.788662 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:09 crc kubenswrapper[4949]: E0120 14:51:09.788861 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872513 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975880 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181362 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284299 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387224 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387278 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489614 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489657 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.761947 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:18:02.316334791 +0000 UTC Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.788855 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:10 crc kubenswrapper[4949]: E0120 14:51:10.789085 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796651 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899326 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002786 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.105000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.105012 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207728 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310426 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413240 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516930 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.722880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723155 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723165 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.762391 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:40:18.615848017 +0000 UTC Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.788889 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.788889 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789014 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.789208 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789398 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.825894 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.825935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.825947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.825967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.825978 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.928380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.928427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.928438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.928456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.928468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.030364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.030429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.030454 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.030503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.030553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.133347 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.133392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.133409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.133430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.133445 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.235955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.235998 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.236011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.236025 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.236035 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.338167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.338241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.338258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.338282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.338301 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.440789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.440829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.440837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.440851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.440861 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.543479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.543565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.543583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.543606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.543624 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.593409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.593635 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.593754 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:51:44.593726782 +0000 UTC m=+100.403557680 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.646171 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.646223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.646233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.646248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.646259 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.748407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.748459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.748468 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.748480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.748489 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.762982 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:57:28.209116993 +0000 UTC Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.788307 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.788459 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.851272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.851342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.851365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.851394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.851416 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.955360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.955398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.955410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.955426 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.955442 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:12Z","lastTransitionTime":"2026-01-20T14:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.058057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.058110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.058123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.058139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.058152 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.160883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.160943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.160965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.160991 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.161014 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.262893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.262956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.262973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.262995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.263014 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.365190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.365482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.365624 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.365801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.365902 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.469631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.469664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.469675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.469688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.469699 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.571995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.572052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.572067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.572084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.572098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.674446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.674674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.674717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.674754 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.674782 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.763246 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:43:30.96277594 +0000 UTC Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.777305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.777377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.777396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.777420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.777438 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788418 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788506 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788565 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788599 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788816 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.801965 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.896442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.896489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.896500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.896537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.896566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:13Z","lastTransitionTime":"2026-01-20T14:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.000153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.000226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.000245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.000268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.000287 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.103380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.103430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.103448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.103474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.103492 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.206159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.206205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.206218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.206235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.206246 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.308723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.308776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.308795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.308819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.308837 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.411819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.412125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.412295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.412446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.412623 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.515253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.515289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.515303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.515318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.515329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.617539 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.617580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.617590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.617607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.617618 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.720127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.720399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.720481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.720588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.720677 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.764350 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:19:34.639935683 +0000 UTC Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.788097 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:14 crc kubenswrapper[4949]: E0120 14:51:14.788248 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.799666 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.808320 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.817901 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822766 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.827835 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z 
is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.837186 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc 
kubenswrapper[4949]: I0120 14:51:14.848513 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.858248 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.869360 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.878698 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.889613 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.910945 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.923157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924245 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924266 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.938053 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec
8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.949624 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.975728 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.991599 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.002897 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.019621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027438 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.031338 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130736 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232376 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232419 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" exitCode=1 Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232730 4949 scope.go:117] "RemoveContainer" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232837 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.241767 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc 
kubenswrapper[4949]: I0120 14:51:15.253568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.269997 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.294073 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.310463 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.325035 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335351 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.345656 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.359473 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.379087 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.389392 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.402252 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.411400 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.420296 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.429632 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439433 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439748 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.445451 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.455847 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.470223 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.482834 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.493912 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.542045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.542310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543291 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645799 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747737 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.765449 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:26:14.816067549 +0000 UTC Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.787951 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.788007 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788136 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788249 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.788508 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788914 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850716 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952760 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055091 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157498 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.237329 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.237388 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259391 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259471 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.260024 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.270947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.282020 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.308245 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.328317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.348998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.368568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d
383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.384188 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.399772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.412209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.427849 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.447928 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.463497 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465486 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.475462 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.489057 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.504820 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a
7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib
/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.517629 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc 
kubenswrapper[4949]: I0120 14:51:16.530677 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.542308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc 
kubenswrapper[4949]: I0120 14:51:16.568579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568607 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671302 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.766387 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:52:52.077377386 +0000 UTC Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.774022 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.788823 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:16 crc kubenswrapper[4949]: E0120 14:51:16.788927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876735 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979897 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082637 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184821 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287592 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390423 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493211 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596667 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.767311 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:58:17.0517292 +0000 UTC Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.788739 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.788906 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.789211 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.789305 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.789499 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.789643 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801903 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908318 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115705 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.218934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321890 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424977 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.425037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.425098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527431 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630691 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.733498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.733914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734451 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.767889 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:27:32.209814616 +0000 UTC Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.788422 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.788666 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837439 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.929948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930114 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.949687 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955333 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.974197 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.979008 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.996836 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001366 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.016343 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:19Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020585 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.035295 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:19Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.035542 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037745 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.246781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.246910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247214 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351663 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456965 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.559993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560107 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664169 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767756 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.768493 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:55:13.793438715 +0000 UTC Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788251 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788286 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788891 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.788804 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.789264 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.789489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.791925 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870734 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.973228 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.973292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.973310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.973551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.973574 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.075506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.075556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.075568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.075592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.075605 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178277 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.252295 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.254045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.255443 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.277460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2
d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280193 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280212 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.289240 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.301251 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.312191 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.321358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.331640 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.359565 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.369955 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382361 4949 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.386415 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.401943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.414786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.427018 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc 
kubenswrapper[4949]: I0120 14:51:20.438654 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.453163 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.465759 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.477968 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484491 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484501 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.486781 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.503818 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.514786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586803 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.769412 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:26:55.126418512 +0000 UTC Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.788017 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:20 crc kubenswrapper[4949]: E0120 14:51:20.788207 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895773 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:20.999714 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207641 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310388 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.516959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517017 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517047 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620109 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723411 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.770489 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:09:43.274764286 +0000 UTC Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788907 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788959 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789071 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789269 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788930 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789870 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.826781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827683 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.930568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.930917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931360 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.034597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.034972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035118 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035402 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241898 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.242009 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.263152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.264361 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268828 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" exitCode=1 Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268941 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.270783 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:51:22 crc kubenswrapper[4949]: E0120 14:51:22.271247 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.292505 4949 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.317615 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.340125 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349495 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.360082 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.374865 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.388748 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.401844 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc 
kubenswrapper[4949]: I0120 14:51:22.416352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.431067 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.447808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452448 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.463974 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.481253 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.517962 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.533797 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.553354 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.566377 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.579127 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aa
e82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.599557 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.622503 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:21Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.574901 6972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.575105 6972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575313 6972 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575352 6972 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 14:51:20.575378 6972 factory.go:656] Stopping watch factory\\\\nI0120 14:51:20.575326 6972 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575390 6972 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:51:20.623041 6972 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 14:51:20.623088 6972 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 14:51:20.623175 6972 ovnkube.go:599] Stopped ovnkube\\\\nI0120 14:51:20.623209 6972 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 14:51:20.623316 6972 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\
\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.658904 4949 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659665 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762476 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.771881 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:22:26.329043535 +0000 UTC Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.788694 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:22 crc kubenswrapper[4949]: E0120 14:51:22.788885 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865774 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968885 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071793 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.176043 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.275426 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.278494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.278800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.279202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.279607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.280089 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384142 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384206 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.487001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694144 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.772732 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:10:17.964845169 +0000 UTC Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788183 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788313 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788362 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788183 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788555 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788715 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796843 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.003978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107414 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211231 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.314951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315052 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417906 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521613 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624943 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727990 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.773657 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:59:42.208373702 +0000 UTC Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.788059 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:24 crc kubenswrapper[4949]: E0120 14:51:24.788291 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.803810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.821748 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734b
b9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc 
kubenswrapper[4949]: I0120 14:51:24.830933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830950 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.842593 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.859306 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc 
kubenswrapper[4949]: I0120 14:51:24.878852 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.898114 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.921000 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935812 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.939637 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.957446 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec
5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.977495 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.999766 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.037204 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91c
c718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039924 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039959 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.048454 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.058632 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.075684 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:21Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.574901 6972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.575105 6972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575313 6972 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575352 6972 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 14:51:20.575378 6972 factory.go:656] Stopping watch factory\\\\nI0120 14:51:20.575326 6972 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575390 6972 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:51:20.623041 6972 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 14:51:20.623088 6972 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 14:51:20.623175 6972 ovnkube.go:599] Stopped ovnkube\\\\nI0120 14:51:20.623209 6972 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 14:51:20.623316 6972 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\
\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.088093 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.100325 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.113430 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.126575 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142284 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.243874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553145 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654869 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757436 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.774870 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:05:32.295996794 +0000 UTC Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788473 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788606 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788488 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788687 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788768 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788931 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860174 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963116 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270984 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374022 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374125 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482387 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585176 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585240 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585274 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.776091 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:24:14.200377102 +0000 UTC Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.788015 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:26 crc kubenswrapper[4949]: E0120 14:51:26.788129 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789148 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891557 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891572 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891581 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994278 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097124 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199607 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.302908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.302985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303060 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.405860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406418 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.613009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.613018 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716673 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716697 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.776596 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:26:55.79771905 +0000 UTC Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.787909 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.788086 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.788320 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.788485 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.788875 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.789037 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.819997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820114 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929893 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032154 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032193 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135847 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238686 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.366679 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.366843 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:32.366816982 +0000 UTC m=+148.176647870 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445341 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468163 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468332 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468347 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468365 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468435 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468372 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468471 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468444655 +0000 UTC m=+148.278275513 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468603 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468639 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468610061 +0000 UTC m=+148.278440959 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468681 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468663531 +0000 UTC m=+148.278494399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468371 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468707 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468734 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468726133 +0000 UTC m=+148.278557001 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548151 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650979 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.753948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754053 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.776811 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:21:58.113111151 +0000 UTC Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.788329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.788514 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961223 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064577 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.167948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168068 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271599 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330490 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.346505 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351374 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.368620 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372998 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.389925 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.413323 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.418116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.418273 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.437469 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.437674 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441506 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544941 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648996 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752430 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.776974 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:54:22.86653812 +0000 UTC Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.788750 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.788940 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.789272 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.789354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.789724 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.789754 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855219 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958388 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958406 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163708 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.369942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370062 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474138 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576987 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.777155 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:53:46.382270376 +0000 UTC Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783433 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.789070 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:30 crc kubenswrapper[4949]: E0120 14:51:30.789339 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.885979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886093 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988475 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091103 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091113 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194409 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297078 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297138 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.401110 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504456 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712107 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.777894 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:47:08.768420384 +0000 UTC Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.788816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.788861 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789038 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789185 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.789748 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789941 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815198 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918586 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918996 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.023127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.023357 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127170 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.230010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.230028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333157 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.435690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436693 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436850 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540282 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642743 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746620 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.778120 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:32:22.260740702 +0000 UTC Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.789355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:32 crc kubenswrapper[4949]: E0120 14:51:32.789628 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.850250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851557 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851760 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954207 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057851 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161250 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264505 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366978 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470232 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470380 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.573957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574039 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.779832 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:04:39.853483531 +0000 UTC Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781793 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.789676 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.789790 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.789945 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.790223 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.790316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.790631 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.885256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886709 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989981 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092586 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195126 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297807 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400335 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503196 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709596 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.781343 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:05:47.089556143 +0000 UTC Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.788777 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:34 crc kubenswrapper[4949]: E0120 14:51:34.789019 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812313 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.837127 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.837111872 podStartE2EDuration="21.837111872s" podCreationTimestamp="2026-01-20 14:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.836824562 +0000 UTC m=+90.646655420" watchObservedRunningTime="2026-01-20 14:51:34.837111872 +0000 UTC m=+90.646942730" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.861201 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.86116083 podStartE2EDuration="1m11.86116083s" podCreationTimestamp="2026-01-20 14:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.860574451 +0000 UTC m=+90.670405319" watchObservedRunningTime="2026-01-20 14:51:34.86116083 +0000 UTC m=+90.670991698" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914989 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.915002 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.915750 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gnfmv" podStartSLOduration=69.91572757 podStartE2EDuration="1m9.91572757s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.91421595 +0000 UTC m=+90.724046848" watchObservedRunningTime="2026-01-20 14:51:34.91572757 +0000 UTC m=+90.725558448" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.930170 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podStartSLOduration=68.930146922 podStartE2EDuration="1m8.930146922s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.929313966 +0000 UTC m=+90.739144824" watchObservedRunningTime="2026-01-20 14:51:34.930146922 +0000 UTC m=+90.739977790" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.949206 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2szcd" podStartSLOduration=68.949187998 podStartE2EDuration="1m8.949187998s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.946239721 +0000 UTC 
m=+90.756070589" watchObservedRunningTime="2026-01-20 14:51:34.949187998 +0000 UTC m=+90.759018856" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.973426 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.973405781 podStartE2EDuration="1m6.973405781s" podCreationTimestamp="2026-01-20 14:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.972863283 +0000 UTC m=+90.782694141" watchObservedRunningTime="2026-01-20 14:51:34.973405781 +0000 UTC m=+90.783236649" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017331 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017416 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.024386 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hzkk7" podStartSLOduration=70.024364013 podStartE2EDuration="1m10.024364013s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.013781786 +0000 UTC m=+90.823612644" watchObservedRunningTime="2026-01-20 14:51:35.024364013 +0000 UTC m=+90.834194881" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.024982 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" podStartSLOduration=69.024973813 podStartE2EDuration="1m9.024973813s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.024323302 +0000 UTC m=+90.834154170" watchObservedRunningTime="2026-01-20 14:51:35.024973813 +0000 UTC m=+90.834804671" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.062562 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.062542205 podStartE2EDuration="1m11.062542205s" podCreationTimestamp="2026-01-20 14:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.049549099 +0000 UTC m=+90.859379957" watchObservedRunningTime="2026-01-20 14:51:35.062542205 +0000 UTC m=+90.872373063" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.079889 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" podStartSLOduration=69.079873143 
podStartE2EDuration="1m9.079873143s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.079459029 +0000 UTC m=+90.889289897" watchObservedRunningTime="2026-01-20 14:51:35.079873143 +0000 UTC m=+90.889704001" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.091473 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.091452353 podStartE2EDuration="43.091452353s" podCreationTimestamp="2026-01-20 14:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.091409872 +0000 UTC m=+90.901240730" watchObservedRunningTime="2026-01-20 14:51:35.091452353 +0000 UTC m=+90.901283221" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222808 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324590 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531581 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531603 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634942 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737989 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.738013 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.782475 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:59:16.139141243 +0000 UTC Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788852 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788952 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788866 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789032 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789197 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.790395 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"
Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.790757 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.844806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.846069 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949469 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257805 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.365105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.365984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366008 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366042 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.469687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470498 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573363 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.676470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.676908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677325 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782759 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782777 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.783480 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:31:22.359572997 +0000 UTC
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.789121 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:36 crc kubenswrapper[4949]: E0120 14:51:36.789406 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886255 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989232 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989316 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194887 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298509 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401205 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503988 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606713 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606761 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709416 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.784634 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:12:34.24476391 +0000 UTC
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.787940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.787994 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.788012 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788092 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788242 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788309 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812132 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915443 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017901 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120234 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248811 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248910 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352347 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455323 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557312 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.661014 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763884 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.785623 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:57:57.520568972 +0000 UTC Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.787912 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:38 crc kubenswrapper[4949]: E0120 14:51:38.788062 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867122 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969512 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175888 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279740 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389726 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.493007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.493024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529274 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.575498 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2"] Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.575856 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.577865 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578253 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578418 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593285 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593378 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694662 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694789 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694801 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.695089 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.696932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.708269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.711718 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.786784 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:20:33.24828272 +0000 UTC Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.786860 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.787985 4949 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.788250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788302 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.788407 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788451 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.794501 4949 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.890112 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.337684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" event={"ID":"f5131c40-5bac-4c6a-b498-95560669483a","Type":"ContainerStarted","Data":"0524b07f4739d1105af8cc7d9c2806b90304508d9b18cc3d00b496381d0f09af"} Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.337754 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" event={"ID":"f5131c40-5bac-4c6a-b498-95560669483a","Type":"ContainerStarted","Data":"628a7ae05a3a0834d9c35c526003957c1142c4942f8ad33c2a1d3d39e887ec68"} Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.359752 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" podStartSLOduration=75.35972937 podStartE2EDuration="1m15.35972937s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:40.358890753 +0000 UTC m=+96.168721611" watchObservedRunningTime="2026-01-20 14:51:40.35972937 +0000 UTC m=+96.169560268" Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.788991 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:40 crc kubenswrapper[4949]: E0120 14:51:40.789483 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.788849 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.789299 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:42 crc kubenswrapper[4949]: I0120 14:51:42.788740 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:42 crc kubenswrapper[4949]: E0120 14:51:42.788898 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.788872 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.788988 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789037 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789171 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.789253 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789338 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:44 crc kubenswrapper[4949]: I0120 14:51:44.644837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.645155 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.645279 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:52:48.645246797 +0000 UTC m=+164.455077715 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:44 crc kubenswrapper[4949]: I0120 14:51:44.788287 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.790732 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788359 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788438 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788502 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788592 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788690 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:46 crc kubenswrapper[4949]: I0120 14:51:46.788314 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:46 crc kubenswrapper[4949]: E0120 14:51:46.788570 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.788803 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.789656 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.789868 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.790163 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.790226 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:48 crc kubenswrapper[4949]: I0120 14:51:48.788829 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:48 crc kubenswrapper[4949]: E0120 14:51:48.789043 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:48 crc kubenswrapper[4949]: I0120 14:51:48.790472 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:51:48 crc kubenswrapper[4949]: E0120 14:51:48.790812 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788810 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788885 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788834 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.789994 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.790388 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.790717 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:50 crc kubenswrapper[4949]: I0120 14:51:50.789061 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:50 crc kubenswrapper[4949]: E0120 14:51:50.789467 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788185 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788233 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788225 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.788703 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.789092 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.789212 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:52 crc kubenswrapper[4949]: I0120 14:51:52.788992 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:52 crc kubenswrapper[4949]: E0120 14:51:52.789233 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788188 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788219 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.788389 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.788557 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788218 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.789089 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:54 crc kubenswrapper[4949]: I0120 14:51:54.787989 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:54 crc kubenswrapper[4949]: E0120 14:51:54.790157 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.787853 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.787925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.788059 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.788202 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.789261 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.789380 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:56 crc kubenswrapper[4949]: I0120 14:51:56.788627 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:56 crc kubenswrapper[4949]: E0120 14:51:56.788862 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788665 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788704 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.788822 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788867 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.789045 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.789228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:58 crc kubenswrapper[4949]: I0120 14:51:58.788800 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:58 crc kubenswrapper[4949]: E0120 14:51:58.788976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788612 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788693 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.788975 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.789076 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788715 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.789851 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:00 crc kubenswrapper[4949]: I0120 14:52:00.789054 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:00 crc kubenswrapper[4949]: E0120 14:52:00.789243 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.418071 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.418843 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419048 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" exitCode=1 Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1"} Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419621 4949 scope.go:117] "RemoveContainer" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.420579 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.421007 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.788913 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.789011 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.789035 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789176 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789280 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789430 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.425273 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.788135 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:02 crc kubenswrapper[4949]: E0120 14:52:02.788273 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.789553 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.430817 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.433336 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.433776 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639176 4949 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podStartSLOduration=97.639143568 podStartE2EDuration="1m37.639143568s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:03.466109632 +0000 UTC m=+119.275940510" watchObservedRunningTime="2026-01-20 14:52:03.639143568 +0000 UTC m=+119.448974476" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639702 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"] Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639876 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.640108 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788988 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788856 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789072 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789441 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.774599 4949 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 20 14:52:04 crc kubenswrapper[4949]: I0120 14:52:04.788851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.790866 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.880295 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.788955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.789013 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789709 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.789013 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789858 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789935 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:06 crc kubenswrapper[4949]: I0120 14:52:06.789085 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:06 crc kubenswrapper[4949]: E0120 14:52:06.789271 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788410 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788566 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788663 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788685 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788848 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788997 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:08 crc kubenswrapper[4949]: I0120 14:52:08.788335 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:08 crc kubenswrapper[4949]: E0120 14:52:08.788504 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788241 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788379 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788612 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788751 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.881400 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 20 14:52:10 crc kubenswrapper[4949]: I0120 14:52:10.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:10 crc kubenswrapper[4949]: E0120 14:52:10.788404 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788603 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788421 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788768 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788832 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:12 crc kubenswrapper[4949]: I0120 14:52:12.789119 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:12 crc kubenswrapper[4949]: E0120 14:52:12.789331 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788510 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788608 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.788777 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.788872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.789000 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:14 crc kubenswrapper[4949]: I0120 14:52:14.790882 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1"
Jan 20 14:52:14 crc kubenswrapper[4949]: I0120 14:52:14.794117 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:14 crc kubenswrapper[4949]: E0120 14:52:14.794489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:14 crc kubenswrapper[4949]: E0120 14:52:14.882632 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.500448 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log"
Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.500558 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470"}
Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788457 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788687 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788814 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788867 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:16 crc kubenswrapper[4949]: I0120 14:52:16.788212 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:16 crc kubenswrapper[4949]: E0120 14:52:16.788410 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788550 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788606 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.788692 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788770 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.788861 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.789018 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:18 crc kubenswrapper[4949]: I0120 14:52:18.788887 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:18 crc kubenswrapper[4949]: E0120 14:52:18.789102 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788299 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.788547 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.788606 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788892 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.789156 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.191369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.230276 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231076 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231443 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231868 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.232310 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.232928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.233209 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.233720 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.236339 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.236797 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.237266 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.237835 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.238411 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.238891 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.252995 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.254567 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266801 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266833 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266860 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266912 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266941 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266965 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267026 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267075 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267111 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267178 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267199 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267227 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267283 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267333 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267397 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267422 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267500 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267731 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267952 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.268006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.268113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 
14:52:20.271014 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.271678 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.271966 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272435 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272742 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272785 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.273802 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.283656 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.283919 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284059 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284405 4949 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284570 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284730 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284826 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284929 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285044 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285118 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285195 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285251 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284615 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285374 4949 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285396 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285474 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285490 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285556 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285640 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285123 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285325 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285345 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285323 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285857 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 14:52:20 crc 
kubenswrapper[4949]: I0120 14:52:20.285972 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286110 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286188 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286275 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286414 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286963 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287333 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287551 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287744 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287945 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288181 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288421 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.290690 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.293785 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.294249 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285492 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.295360 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.292561 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.298561 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300058 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300270 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300409 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300062 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300955 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301092 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301373 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301631 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301681 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.306405 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307129 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307813 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307976 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.308404 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309069 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309148 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309206 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310315 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310474 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310729 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.312762 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314595 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314632 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314698 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316007 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316099 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316176 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316472 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316406 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316563 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316629 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316703 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317153 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kncwj"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317744 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317808 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317982 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.319817 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320130 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320697 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320785 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320790 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320803 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320941 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.323084 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.329458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"] Jan 
20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.333177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.333695 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334044 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334532 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334696 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336585 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336862 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336990 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.337116 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338063 4949 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338307 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338472 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339006 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339066 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339129 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339217 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339279 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340265 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340430 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340821 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.341014 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.345824 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.347977 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.349181 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.351210 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.352354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.378266 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.379720 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.381634 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385420 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385846 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.386680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.386869 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387769 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387829 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387853 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387942 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388029 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388052 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388125 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388150 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388173 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388198 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388238 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388258 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388279 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388299 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388343 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388398 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388470 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388492 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388541 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388604 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.393909 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394681 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394715 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394846 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395239 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395600 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385858 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395899 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396011 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396143 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396349 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.398494 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.398656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399058 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400329 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387461 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400786 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387534 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.411918 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.407457 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.414645 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.414846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415055 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415143 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415270 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415581 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.407724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415992 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.416176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.416530 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417804 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.418382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.419131 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.421866 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.422481 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.422585 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423004 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423061 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423407 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423497 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424199 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424318 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424628 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.430170 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.430249 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433554 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433669 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433861 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.436778 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8mlj4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.437903 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.438023 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.439950 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460268 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460321 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460344 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.466730 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.472936 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.475958 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.478979 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.479450 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.486848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490396 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490423 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490441 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490467 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490537 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8md\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490554 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490582 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490600 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490634 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490655 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490794 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.491165 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.491183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493569 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493626 4949 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493643 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.494832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.495693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.497098 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.497696 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.498970 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.499656 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.500705 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.501805 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.502879 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] 
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.503848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.504844 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.505821 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.506804 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-j8fgh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.507985 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.508246 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.509163 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.510233 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.510286 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.511346 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.512260 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.513205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.514211 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.515242 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.516313 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.517261 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.518243 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j8fgh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.519202 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.520551 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.520666 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.523988 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.538245 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.554288 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.573920 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.591953 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592761 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592801 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.593387 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594013 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594602 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8md\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594834 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594957 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.595592 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.595866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594902 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596213 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 
14:52:20.596340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.597088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.597137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.599112 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.599534 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.601689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.603749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.607323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: 
\"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.614429 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.634381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.654236 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.674264 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.694049 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.706141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.714017 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.723723 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.733555 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.753030 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.772548 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.788657 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.793095 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.814048 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.832818 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.854168 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.873628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.894443 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.913820 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.934443 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.953685 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.974058 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.988964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.993658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.996363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.996575 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.013810 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.018437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.034193 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.053646 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.074207 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.075005 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.078125 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.114600 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.133269 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.153670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.173215 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.194122 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.253210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.269028 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: 
\"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.292749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.308619 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.313749 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.334663 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.345725 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.354189 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.373581 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.394131 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.411852 4949 request.go:700] Waited for 1.014910136s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.413630 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.434603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.454819 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.457106 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.473552 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.494087 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.501945 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.510730 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.518135 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.535228 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.554940 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.569891 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.575489 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 
14:52:21.608974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.614249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.637297 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.661484 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.683149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.689805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.694980 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.694702 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.743125 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.743541 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.744421 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1e03d2d_d9d6_4cf8_9339_ec325b99453d.slice/crio-ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628 WatchSource:0}: Error finding container ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628: Status 404 returned error can't find the container with id ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.753361 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.773675 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.774287 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.775178 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788320 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788336 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788580 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.793726 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.795065 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6278caf6_b4d9_414c_99ed_686de2b23a80.slice/crio-7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70 WatchSource:0}: Error finding container 7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70: Status 404 returned error can't find the container with id 7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.813439 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" 
Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.818015 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.833959 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.853938 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.874091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.893426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.894523 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.918434 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.939309 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.951358 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.953878 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.963117 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f69495e_a17d_4493_b598_99c2fc9afee7.slice/crio-548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e WatchSource:0}: Error finding container 548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e: Status 404 returned error can't find the container with id 548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.976907 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.996013 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.015079 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.032917 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.054192 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.073466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.077569 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.093580 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.113396 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.132992 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 14:52:22 crc kubenswrapper[4949]: W0120 14:52:22.139454 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc07a381_955f_47a2_89ab_59985f08e602.slice/crio-53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf WatchSource:0}: Error finding container 53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf: Status 404 returned error can't find the container with id 
53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.153374 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.172842 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.192897 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.214256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.233605 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.253922 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.274081 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.293426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.313771 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.333914 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.353330 4949 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.373376 4949 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.393915 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.413711 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.431993 4949 request.go:700] Waited for 1.838282479s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.452494 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.471191 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.487762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8md\" (UniqueName: 
\"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.507584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.527345 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.535844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" event={"ID":"cc07a381-955f-47a2-89ab-59985f08e602","Type":"ContainerStarted","Data":"29f094912f73a29db8ae50237a19e53739f670dd3fdc4b70fd4d6162582373d7"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.535906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" event={"ID":"cc07a381-955f-47a2-89ab-59985f08e602","Type":"ContainerStarted","Data":"53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537855 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" 
event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"3cbb47949f3611a106c55ae3c09b569b29efc89c5be14377d0c73f8a9b8e6291"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"cacaa10e06e80f06e91c6ed729042af0bf81d186d1f76f05404c95fc95246ac8"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"1fe3b01b1d19669fae768fefd6ed9eb0085b09f78d7896defb976e88a14da08d"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542073 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerStarted","Data":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542144 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerStarted","Data":"6cdd7178026b2587db50c95fe7c40688b8e05cd993d070aa0db4f3a3e9c38e1f"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542377 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543969 4949 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zc5vv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543994 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerDied","Data":"032d13aa9eab6efaa4e137f542fc4683fbdc6793665d1b5b8603afa609d985c6"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.544020 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543960 4949 generic.go:334] "Generic (PLEG): container finished" podID="1c433a7c-ae2d-4320-b456-58b37bdd5f22" containerID="032d13aa9eab6efaa4e137f542fc4683fbdc6793665d1b5b8603afa609d985c6" exitCode=0 Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.544122 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"0efa6db78e84cb5227720027d0f377bcda2d44ee51afedbcff4784bee253dd91"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.546483 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"d7e19dd5252c402931923f8d46dd74c4df0cbd1f7c164cb2e90cd291ed391050"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.546847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" 
event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"a4bb167e555bbaf94d569377a87cc211a2c59e6ad892f948644dfd910cf08394"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.546864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.547762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548411 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerStarted","Data":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548456 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548470 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerStarted","Data":"7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.549855 4949 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fz5x4 container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.549902 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.550738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" event={"ID":"b1e03d2d-d9d6-4cf8-9339-ec325b99453d","Type":"ContainerStarted","Data":"988323478fe4a63f7c97a04f3239dbdd83a78967a23ef021f317a613dcd00a7e"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.550780 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" event={"ID":"b1e03d2d-d9d6-4cf8-9339-ec325b99453d","Type":"ContainerStarted","Data":"ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.553056 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.574138 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.575578 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.623039 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.634359 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652565 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652631 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: 
\"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652727 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxqp\" (UniqueName: \"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652855 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652904 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652959 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.653012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654114 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654172 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654195 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654242 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654335 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " 
pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654448 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654488 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654627 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654676 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654789 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654829 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654869 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655161 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655236 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655413 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7fm\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655744 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655821 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655868 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " 
pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655977 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656029 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656051 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656073 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656105 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656128 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod 
\"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656213 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656236 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.656397 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.156386454 +0000 UTC m=+138.966217312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.678964 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.683436 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.694319 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.721616 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.757399 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.257378981 +0000 UTC m=+139.067209839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757430 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757555 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757579 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757638 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757740 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757762 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 
14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757871 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757894 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod 
\"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757992 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758052 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758074 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758145 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94z48\" (UniqueName: \"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758204 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758231 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758282 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758308 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758364 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758414 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: 
\"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758469 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758497 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758674 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758733 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758761 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758825 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: 
\"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758850 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758891 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758915 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: 
I0120 14:52:22.758975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759003 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759028 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod 
\"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759135 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759159 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759182 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759206 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7fm\" 
(UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759382 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759428 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759509 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759575 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759601 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759646 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759754 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod \"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759962 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760095 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760195 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760246 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxqp\" (UniqueName: \"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760434 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762117 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762157 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762248 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762272 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.763262 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.765421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.765794 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.767539 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.767727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.771504 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.773665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.774917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.775928 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.775994 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776021 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776131 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776163 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776759 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.777627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.778719 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.780283 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.781080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.782838 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.783255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.783918 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.784815 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.785246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.785333 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.285317744 +0000 UTC m=+139.095148792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.788945 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.801117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.801321 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.802767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.802793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.803229 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.825341 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.825398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.839842 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.839990 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.841042 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.841161 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.845293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.846249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.846886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847322 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847436 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.848460 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.850743 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.851258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.851865 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxqp\" (UniqueName: 
\"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.852381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.854249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.866663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.866982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc 
kubenswrapper[4949]: I0120 14:52:22.878543 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.879851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880237 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 
20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880330 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880407 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880432 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880459 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94z48\" (UniqueName: 
\"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880476 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880496 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880558 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880581 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880621 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880649 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880669 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod 
\"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880774 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 
14:52:22.880816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880857 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880896 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc 
kubenswrapper[4949]: I0120 14:52:22.880915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880989 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881010 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881110 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881173 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jkz\" (UniqueName: 
\"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881228 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881274 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881337 
4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881364 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881405 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881423 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " 
pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.882489 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.382460298 +0000 UTC m=+139.192291336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.884763 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.884830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885021 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885674 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885754 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.886779 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.887046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.887225 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.890744 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.890929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891787 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.892122 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.892832 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.896149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.904316 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.904936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.905500 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.908646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.909470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.912932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.912937 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.913722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.914879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.915050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.915745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.916268 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.916366 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.917398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.929184 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.930665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.931121 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.931354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.937783 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.952177 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.953266 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7fm\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.970802 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.973266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"]
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.978826 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.980733 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"]
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.987622 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.988191 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.488168328 +0000 UTC m=+139.297999186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.992234 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.012084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.028281 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.040566 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.042866 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.044887 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"]
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.051690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.099205 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.099978 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.599954517 +0000 UTC m=+139.409785375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.112312 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.112673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:23 crc kubenswrapper[4949]: W0120 14:52:23.131822 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe950de2_c48d_481b_a5fc_c943fe124904.slice/crio-e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed WatchSource:0}: Error finding container e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed: Status 404 returned error can't find the container with id e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.138298 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod \"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.145934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"]
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.149998 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.156708 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.161598 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.186960 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.197087 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.198401 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94z48\" (UniqueName: \"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.201730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.202275 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.70225893 +0000 UTC m=+139.512089788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.205595 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb9s9"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.213001 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.217764 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.231903 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.237807 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.237839 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.256108 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.262482 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.265046 4949 csr.go:261] certificate signing request csr-xfdr5 is approved, waiting to be issued
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.277046 4949 csr.go:257] certificate signing request csr-xfdr5 is issued
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.278397 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.296343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.303942 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.304343 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.804325174 +0000 UTC m=+139.614156032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.325123 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.337488 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.347152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.355854 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.361102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.366248 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.380785 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.383549 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"]
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.395257 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.403986 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.409355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.419923 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.424024 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430204 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430629 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.431120 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.431662 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.931646829 +0000 UTC m=+139.741477687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.439896 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.445955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.453176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.455481 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mlj4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.463863 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.478030 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pcdvd"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.482734 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"]
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.485331 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.532217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.532996 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.032973768 +0000 UTC m=+139.842804626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.610253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" event={"ID":"4787cfd3-62d3-494b-94c9-e01ff459c73b","Type":"ContainerStarted","Data":"de8de3cd355a157b6f12ebaf21f381d323b906d872b3cd525050c22abe0d6fc9"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.614732 4949 generic.go:334] "Generic (PLEG): container finished" podID="113494fa-baf7-4f60-9a9c-e8c8d6abb146" containerID="6dd90ff4602a759742ef4b5a2644cd0cdfca0356f129ec4fe9db3642443bbb47" exitCode=0
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 
14:52:23.614918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerDied","Data":"6dd90ff4602a759742ef4b5a2644cd0cdfca0356f129ec4fe9db3642443bbb47"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.614975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerStarted","Data":"f8927f80d6ab352d739f767e829fde753ee7ce0e9e4f6d680a1d142bff2c6486"} Jan 20 14:52:23 crc kubenswrapper[4949]: W0120 14:52:23.617141 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45bacc20_7998_4250_bbd3_fd1d24741ea7.slice/crio-d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f WatchSource:0}: Error finding container d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f: Status 404 returned error can't find the container with id d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.619876 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"89ef8c049a23525f1cf29b7a9fc2a7b7e5865ef9e7c772deacf73e7e50d98951"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.635166 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.635434 4949 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.135421286 +0000 UTC m=+139.945252144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.641041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kncwj" event={"ID":"fe950de2-c48d-481b-a5fc-c943fe124904","Type":"ContainerStarted","Data":"e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.650255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"34db3022f9b412d703645b40f9bf19b7323168b6725466186d6c6ed041a5565e"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.650329 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"5c7c895fc6170ae616c1d3a919cf726add0d80c0bc182f19860276579ad4027c"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.657255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" 
event={"ID":"d68754d4-260b-460e-a34e-3d4a7313e4eb","Type":"ContainerStarted","Data":"e8b3ba99b2adcb5fddecf3546d6e23c4fb88d6189319ccaed8fb80f7d25879d3"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.658270 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.666591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerStarted","Data":"37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce"} Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.688439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.697709 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.737331 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.738172 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.238153333 +0000 UTC m=+140.047984191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.839916 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.840945 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"] Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.841935 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.341920276 +0000 UTC m=+140.151751124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.842985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.843146 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.942151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.942678 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.442649145 +0000 UTC m=+140.252480003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: W0120 14:52:24.027123 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb60fa1_1584_4837_890f_888754026b25.slice/crio-3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b WatchSource:0}: Error finding container 3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b: Status 404 returned error can't find the container with id 3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.048233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.048875 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.548831401 +0000 UTC m=+140.358662259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.149462 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.149936 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.649918782 +0000 UTC m=+140.459749630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.251320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.251695 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.751677086 +0000 UTC m=+140.561507944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.278075 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 14:47:23 +0000 UTC, rotation deadline is 2026-12-01 19:16:32.592785638 +0000 UTC Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.278148 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7564h24m8.31464197s for next certificate rotation Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.352057 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.352782 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.852746746 +0000 UTC m=+140.662577604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.352938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.353279 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.853267314 +0000 UTC m=+140.663098172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.456093 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.456417 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.956392175 +0000 UTC m=+140.766223023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.475228 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" podStartSLOduration=119.475214773 podStartE2EDuration="1m59.475214773s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.473878768 +0000 UTC m=+140.283709616" watchObservedRunningTime="2026-01-20 14:52:24.475214773 +0000 UTC m=+140.285045631" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.523474 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podStartSLOduration=118.523458255 podStartE2EDuration="1m58.523458255s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.522989138 +0000 UTC m=+140.332819996" watchObservedRunningTime="2026-01-20 14:52:24.523458255 +0000 UTC m=+140.333289113" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.560293 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" 
(UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.560906 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.060891234 +0000 UTC m=+140.870722092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.589553 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" podStartSLOduration=119.58953572 podStartE2EDuration="1m59.58953572s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.587602103 +0000 UTC m=+140.397432961" watchObservedRunningTime="2026-01-20 14:52:24.58953572 +0000 UTC m=+140.399366578" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.661701 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.661801 4949 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.161776017 +0000 UTC m=+140.971606865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.662595 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.663076 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.163063562 +0000 UTC m=+140.972894420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.715317 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"313b85b5ccfbfed37d1a387f50fba0506960708f5ba0dc437a92da338c5cdcb4"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.715373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.743779 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mlc47" event={"ID":"18aa9682-4716-4c4f-a53e-cc2f312c7c16","Type":"ContainerStarted","Data":"103c82e6c589c90e8fe9b44bc61894c75a4c6cee3c97bd9e018892e90099c9b0"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.743830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mlc47" event={"ID":"18aa9682-4716-4c4f-a53e-cc2f312c7c16","Type":"ContainerStarted","Data":"1b6dcf23e704d43129b2db26c4cf71de5e420dff45438308b5dad9c933b766fe"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.746976 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.754753 4949 patch_prober.go:28] interesting pod/console-operator-58897d9998-mlc47 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.754835 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podUID="18aa9682-4716-4c4f-a53e-cc2f312c7c16" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.756588 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mlj4" event={"ID":"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612","Type":"ContainerStarted","Data":"274f6429d9c5e4de1a3f70212f4fa396a3f9e051f079899d894b87f554cde5b9"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.756646 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mlj4" event={"ID":"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612","Type":"ContainerStarted","Data":"2ef016eb38eac852c6d5d9d0475d714eb7957dc2f9f4690b1ec8b1137fb0c550"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.763290 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.763758 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.263741338 +0000 UTC m=+141.073572196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.798852 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" podStartSLOduration=119.798833317 podStartE2EDuration="1m59.798833317s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.794157296 +0000 UTC m=+140.603988144" watchObservedRunningTime="2026-01-20 14:52:24.798833317 +0000 UTC m=+140.608664175" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.840685 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"3a3927b0fda226b916d27dde6f796567a78a6aa8b521909f2498a5f9dfe07b43"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.840740 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kncwj" event={"ID":"fe950de2-c48d-481b-a5fc-c943fe124904","Type":"ContainerStarted","Data":"64eedbd12fdc30e368c677be1cc0d7773d4eadd64d6ba8314af72550c4ede168"} Jan 20 14:52:24 
crc kubenswrapper[4949]: I0120 14:52:24.844932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"2afde7b0c1485178ee0e392a0a09c156aed7d4bca491bf9dcf9c5f07f63fa01d"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.852758 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" event={"ID":"d68754d4-260b-460e-a34e-3d4a7313e4eb","Type":"ContainerStarted","Data":"57d3ab7a6f55a7babed716d2ff54de05a138792ed7b9c74742a6f900f0326e85"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.854248 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerStarted","Data":"7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.854778 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.855460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" event={"ID":"4787cfd3-62d3-494b-94c9-e01ff459c73b","Type":"ContainerStarted","Data":"a5947e5bffea3a9e57650043923778e63b2387b31984bac0d10ed9bfde92bcb6"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.857169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerStarted","Data":"d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f"} Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.859543 4949 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ntmdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.859586 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.864802 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.865054 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.365043137 +0000 UTC m=+141.174873995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.869026 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podStartSLOduration=118.869005193 podStartE2EDuration="1m58.869005193s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.841021569 +0000 UTC m=+140.650852437" watchObservedRunningTime="2026-01-20 14:52:24.869005193 +0000 UTC m=+140.678836051" Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.967556 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.968889 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.468869152 +0000 UTC m=+141.278700010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.996754 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.072006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.072295 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.572284613 +0000 UTC m=+141.382115471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.116492 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" podStartSLOduration=119.116479525 podStartE2EDuration="1m59.116479525s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.112980644 +0000 UTC m=+140.922811502" watchObservedRunningTime="2026-01-20 14:52:25.116479525 +0000 UTC m=+140.926310383" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.172825 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.172987 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.67296166 +0000 UTC m=+141.482792528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.173063 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.173426 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.673412475 +0000 UTC m=+141.483243333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.273891 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.274240 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.774225237 +0000 UTC m=+141.584056095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.276529 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:25 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:25 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:25 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.276578 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.376913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.377351 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:25.877336327 +0000 UTC m=+141.687167205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.392475 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" podStartSLOduration=119.392446247 podStartE2EDuration="1m59.392446247s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.390237371 +0000 UTC m=+141.200068239" watchObservedRunningTime="2026-01-20 14:52:25.392446247 +0000 UTC m=+141.202277105" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.405314 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podStartSLOduration=119.40529314 podStartE2EDuration="1m59.40529314s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.337883468 +0000 UTC m=+141.147714326" watchObservedRunningTime="2026-01-20 14:52:25.40529314 +0000 UTC m=+141.215123998" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.442224 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" 
podStartSLOduration=120.442206261 podStartE2EDuration="2m0.442206261s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.434977631 +0000 UTC m=+141.244808489" watchObservedRunningTime="2026-01-20 14:52:25.442206261 +0000 UTC m=+141.252037119" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.479864 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.480346 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.980325163 +0000 UTC m=+141.790156021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.481934 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" podStartSLOduration=119.481922219 podStartE2EDuration="1m59.481922219s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.480763158 +0000 UTC m=+141.290594036" watchObservedRunningTime="2026-01-20 14:52:25.481922219 +0000 UTC m=+141.291753077" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.540090 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kncwj" podStartSLOduration=119.540070451 podStartE2EDuration="1m59.540070451s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.536982384 +0000 UTC m=+141.346813242" watchObservedRunningTime="2026-01-20 14:52:25.540070451 +0000 UTC m=+141.349901299" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.551805 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"] Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.569910 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-8mlj4" podStartSLOduration=5.569892337 podStartE2EDuration="5.569892337s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.56501831 +0000 UTC m=+141.374849168" watchObservedRunningTime="2026-01-20 14:52:25.569892337 +0000 UTC m=+141.379723195" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.581416 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.581732 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.081718395 +0000 UTC m=+141.891549253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.597545 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j8fgh"] Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.640553 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" podStartSLOduration=119.64053552 podStartE2EDuration="1m59.64053552s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.632369369 +0000 UTC m=+141.442200227" watchObservedRunningTime="2026-01-20 14:52:25.64053552 +0000 UTC m=+141.450366378" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.641647 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.672685 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" podStartSLOduration=119.672667006 podStartE2EDuration="1m59.672667006s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.669307051 +0000 UTC m=+141.479137909" watchObservedRunningTime="2026-01-20 14:52:25.672667006 +0000 UTC m=+141.482497864" Jan 20 
14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.682001 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.682320 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.182301298 +0000 UTC m=+141.992132156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.690849 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"] Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.720733 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podStartSLOduration=119.720712541 podStartE2EDuration="1m59.720712541s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.713927597 +0000 UTC m=+141.523758455" watchObservedRunningTime="2026-01-20 
14:52:25.720712541 +0000 UTC m=+141.530543409" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.774167 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podStartSLOduration=120.77414079 podStartE2EDuration="2m0.77414079s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.765981719 +0000 UTC m=+141.575812607" watchObservedRunningTime="2026-01-20 14:52:25.77414079 +0000 UTC m=+141.583971648" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.784723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.785105 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.285090937 +0000 UTC m=+142.094921805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.863830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerStarted","Data":"f6284d9327fb4257f82064cb3518053de8a65a539d39a7ce3f622cb7176b100e"} Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.865452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"cff4dab721946a8596cca4daa01c28fbee97b92700c0fabbaf4d44f4ae341b63"} Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.869869 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerStarted","Data":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"} Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.871548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.873887 4949 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-brlp7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection 
refused" start-of-body= Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.873959 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.883313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerStarted","Data":"f4877eaf97bd7c4d0e52e4fddc8cae7a451b37b3fd251230d5ececd8dac1c70e"} Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.895587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.896081 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.396061819 +0000 UTC m=+142.205892677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.957261 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"042cff9994f5feb00d67ab5d027011d818eb7cfce28a6d3bf82bc68002753c5a"} Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.000794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.001155 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.501139587 +0000 UTC m=+142.310970445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058062 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"ff8e053dc01feb3ca00a6ffb9d8b998ddf5e18cc4ab24b633e4615219c1f20cf"} Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058205 4949 patch_prober.go:28] interesting pod/console-operator-58897d9998-mlc47 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058291 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podUID="18aa9682-4716-4c4f-a53e-cc2f312c7c16" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.059168 4949 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ntmdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.059216 4949 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.071201 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:26 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:26 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:26 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.072036 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.092165 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" podStartSLOduration=120.092130691 podStartE2EDuration="2m0.092130691s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:26.057797938 +0000 UTC m=+141.867628796" watchObservedRunningTime="2026-01-20 14:52:26.092130691 +0000 UTC m=+141.901961549" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.102584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.104895 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.604868149 +0000 UTC m=+142.414698997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.104964 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.152808 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.152854 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"] Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.170536 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b89af20_11f6_4e88_8b1c_5e5ff5b47a70.slice/crio-dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012 WatchSource:0}: Error finding container 
dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012: Status 404 returned error can't find the container with id dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012 Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.182081 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.199329 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.205745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.211179 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.212020 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"] Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.212867 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.712846456 +0000 UTC m=+142.522677514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.220574 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.307034 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.307313 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.80729905 +0000 UTC m=+142.617129908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.346799 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.346853 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.408312 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.408895 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.908868366 +0000 UTC m=+142.718699224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.509732 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.509919 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.009886965 +0000 UTC m=+142.819717823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.510136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.510466 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.010455985 +0000 UTC m=+142.820286843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.548886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"] Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.559334 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c06ab34_4b4e_4047_b32d_e9d36c792b1d.slice/crio-f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8 WatchSource:0}: Error finding container f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8: Status 404 returned error can't find the container with id f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8 Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.569499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.585694 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.589591 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.613076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.613376 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.113361748 +0000 UTC m=+142.923192606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.620655 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.625389 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.625446 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.633991 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.643434 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.653252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.664015 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.673728 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"] Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.679818 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9d1a76_4686_40ae_8b09_e66126088926.slice/crio-9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf WatchSource:0}: Error finding container 9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf: Status 404 returned error can't find the container with id 9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.685737 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541edc44_7cd7_4c73_a5eb_48e2f5fd69b3.slice/crio-e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1 WatchSource:0}: Error finding container e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1: Status 404 returned error can't find the container with id e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.693028 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf794ef9_4dc6_4b8b_a2ff_caad4a9ef6c8.slice/crio-5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25 WatchSource:0}: Error finding container 5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25: Status 404 returned error can't find the container with id 5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.695305 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27518978_3cb4_4732_bc84_13abfa7e9c81.slice/crio-c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72 WatchSource:0}: Error finding container c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72: Status 404 returned error can't find the container with id c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.717063 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980ff476_0915_44c2_8665_41d9074e3763.slice/crio-a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e WatchSource:0}: Error finding container a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e: Status 404 returned error can't find the container with id a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.717628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 
14:52:26.718178 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.218154117 +0000 UTC m=+143.027985185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.820035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.820532 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.32049851 +0000 UTC m=+143.130329378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.924435 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.924870 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.424856944 +0000 UTC m=+143.234687802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.000831 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:27 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.000904 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.026297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.026742 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:27.526722932 +0000 UTC m=+143.336553780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.129181 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.129906 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.629887104 +0000 UTC m=+143.439717962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.143066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pcdvd" event={"ID":"8b9d1a76-4686-40ae-8b09-e66126088926","Type":"ContainerStarted","Data":"9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.149676 4949 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r9kf7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]log ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]etcd ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/max-in-flight-filter ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 20 14:52:27 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectcache 
ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/openshift.io-startinformers ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 14:52:27 crc kubenswrapper[4949]: livez check failed Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.149746 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" podUID="1c433a7c-ae2d-4320-b456-58b37bdd5f22" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.155997 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.156227 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.164835 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" event={"ID":"980ff476-0915-44c2-8665-41d9074e3763","Type":"ContainerStarted","Data":"a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.171471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" event={"ID":"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70","Type":"ContainerStarted","Data":"66e9827fb3bb82c584bc32f05e0f8f700c6b1501183f3f87fa681461047289a8"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.171539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" event={"ID":"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70","Type":"ContainerStarted","Data":"dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.172037 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.183405 4949 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qw6xk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.183501 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podUID="0b89af20-11f6-4e88-8b1c-5e5ff5b47a70" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.184244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.184275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"2f1015ed4b727aa9d18ec666b0d9f97e94b1811fa98c890bd182b37558631aa1"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.196682 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" event={"ID":"95c38c39-62f0-4343-9628-5070d8cc10b7","Type":"ContainerStarted","Data":"424e1b64a524a47e51b2ba62e9505534b9a06aa01dac67e538cef3ad47d64694"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.216790 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podStartSLOduration=121.216771886 podStartE2EDuration="2m1.216771886s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.212984895 +0000 UTC m=+143.022815753" watchObservedRunningTime="2026-01-20 14:52:27.216771886 +0000 UTC m=+143.026602734" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.217544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" event={"ID":"10228b44-2c32-4fab-a4f9-c703ef0b6b39","Type":"ContainerStarted","Data":"081f2e4003d8f9201e7929bd8bcde0960e39c5b5577548930dc3c31ada7d254b"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.227139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"aea15de4ff2d58551df56dd45a7919d7993fdc247e8dd58aaaaca249023ddcbf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.231377 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.231710 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.731696331 +0000 UTC m=+143.541527189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.238353 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" event={"ID":"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8","Type":"ContainerStarted","Data":"5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.251625 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.269668 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" 
event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"03f236ec2ada28f4035f0b8b46c58d61ddedd828ed34153b3fa96773db1d662a"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.272253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"b71802b4a268a551cf59c2f1ff7546ae609bb6b755f54e3afaa3fe1e8a7120bf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.279676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"b253c8fc26bc2b66b2b01d1ac645f2d2adcf1e7f951e6927b9b103f64c77c4c6"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.284053 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb9s9" event={"ID":"33ca7885-743f-48cd-b3ba-80f9a1f8cf85","Type":"ContainerStarted","Data":"430500d9817934d094dce88f573d7e9fd6f3d6e1e91165afa5647cb2aeac43f0"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.286472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"cd3f3009560b5f0244c6c09fa458917f6d016b986988143f37d5ba45f8141049"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.290970 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"dd7ad27a82b648cd382e54d39f4197d808a07dc272fcc81c41f01a5bbb73db24"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.295652 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" event={"ID":"27518978-3cb4-4732-bc84-13abfa7e9c81","Type":"ContainerStarted","Data":"c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.298787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" event={"ID":"e45c974f-4645-4895-9f73-cfd03e798e00","Type":"ContainerStarted","Data":"564ed981cd69a7c90c77e596e94cb852780bd48591ae11b510942a99316f069d"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.309064 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" event={"ID":"dd8570b5-67a9-4655-bc3e-c36bb6d5c646","Type":"ContainerStarted","Data":"7de177c8c08c4306419bc51015d9456fa67a418318f6ba5d6e6d89e0c979e401"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.312132 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" event={"ID":"4f451eb2-597d-47c6-aa10-66a79776f101","Type":"ContainerStarted","Data":"df2fc0067953b8dd35a8893742174b90514642cc8b0bf6c340d9c8b03301857c"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.313806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerStarted","Data":"f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.315638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" event={"ID":"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0","Type":"ContainerStarted","Data":"1ce55b3def8fd7237e29b69d58e832fa5d1e9211aa835135d090d3e30ca5e952"} Jan 20 14:52:27 crc 
kubenswrapper[4949]: I0120 14:52:27.333340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.334497 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.83448162 +0000 UTC m=+143.644312478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.341626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" event={"ID":"8b8351da-e624-4d42-be80-14e2c90c57f4","Type":"ContainerStarted","Data":"a4bbe073d535d29b5009e40705559141ed90ca4e136177e8ba291c05acba6004"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.348129 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"d6dbc3461a89c8d9ee6144afd3d08e6e1240b6ab84b323f030a5dd9b98d36518"} Jan 20 14:52:27 crc kubenswrapper[4949]: 
I0120 14:52:27.356583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerStarted","Data":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.360794 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" podStartSLOduration=121.360780104 podStartE2EDuration="2m1.360780104s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.360351381 +0000 UTC m=+143.170182239" watchObservedRunningTime="2026-01-20 14:52:27.360780104 +0000 UTC m=+143.170610962" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.436479 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.436965 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.936945187 +0000 UTC m=+143.746776045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.538851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.543677 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.043651802 +0000 UTC m=+143.853482660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.575058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.576685 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.588295 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.614790 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-w9d9r" podStartSLOduration=121.614769641 podStartE2EDuration="2m1.614769641s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.386836992 +0000 UTC m=+143.196667840" watchObservedRunningTime="2026-01-20 14:52:27.614769641 +0000 UTC m=+143.424600499" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.640243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.640561 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.140503756 +0000 UTC m=+143.950334614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.641116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.642425 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.142414023 +0000 UTC m=+143.952244941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.668545 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.743990 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.744085 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.244071753 +0000 UTC m=+144.053902611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.744342 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.744622 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.244616062 +0000 UTC m=+144.054446920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.844796 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.845004 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.344977977 +0000 UTC m=+144.154808845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.845088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.845365 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.34535401 +0000 UTC m=+144.155184868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.946768 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.947292 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.44725364 +0000 UTC m=+144.257084498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.947457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.947879 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.447872211 +0000 UTC m=+144.257703069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.001620 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:28 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.001731 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.048270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.048444 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:28.548418573 +0000 UTC m=+144.358249431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.048595 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.048937 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.548928011 +0000 UTC m=+144.358758869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.149381 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.149541 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.649503024 +0000 UTC m=+144.459333882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.149628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.149907 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.649900648 +0000 UTC m=+144.459731496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.250996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.251190 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.751161564 +0000 UTC m=+144.560992422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.251464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.251792 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.751779925 +0000 UTC m=+144.561610783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.352278 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.352491 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.852463332 +0000 UTC m=+144.662294260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.352603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.352896 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.852884947 +0000 UTC m=+144.662715805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.363240 4949 generic.go:334] "Generic (PLEG): container finished" podID="01bfc821-a8ed-4dbd-a5b1-fa6659a6499f" containerID="ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93" exitCode=0 Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.363285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerDied","Data":"ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.364406 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerStarted","Data":"f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.365299 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"e167b7406760722ef8a3145bad0c131e20ff501075140a3dffa9b6c8f24160f0"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.366844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" 
event={"ID":"27518978-3cb4-4732-bc84-13abfa7e9c81","Type":"ContainerStarted","Data":"6e637e7527d8d56982e8c05a19643930ca6ce823128b30710e128f64e5621d73"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.370096 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" event={"ID":"dd8570b5-67a9-4655-bc3e-c36bb6d5c646","Type":"ContainerStarted","Data":"cde8ce65ee3d95e1e2f3fe2c8ba0eef3cf5a5e5043feffee539b66d015a35cd3"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.372398 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" event={"ID":"4f451eb2-597d-47c6-aa10-66a79776f101","Type":"ContainerStarted","Data":"c97bfe2c8e22701445e6f3a42cd869302d25270da2734d343fe48bb6f3efbc63"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.378449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" event={"ID":"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0","Type":"ContainerStarted","Data":"e4259156daa53f7e6836156dae598788fb2ca291db48d1d430689acae0308801"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.379576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.381834 4949 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mtrqm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.381871 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" 
podUID="ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.382205 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"4a0808c77baeac9aa271bd8d77baa686a1e07d75220138ea25f508db2bf9f36a"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.383489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" event={"ID":"95c38c39-62f0-4343-9628-5070d8cc10b7","Type":"ContainerStarted","Data":"769db8d1fefe7b3b7dea2f892965a83fad2f71bab86e99631546c605de94eabb"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.393488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pcdvd" event={"ID":"8b9d1a76-4686-40ae-8b09-e66126088926","Type":"ContainerStarted","Data":"248d60b18e845320dde8578674bcefc230ab12222172e5999e2351618aa377fc"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.398460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"7d4a06039a6cf2ee96d82d36197e2c112b483911a8f255e4243ee78937cbfa31"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.400465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" event={"ID":"10228b44-2c32-4fab-a4f9-c703ef0b6b39","Type":"ContainerStarted","Data":"5c0b34b53d600ac0058ee2cc3d7e7e252c0aec4eec10d97975c14049813246b9"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.401093 4949 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.401901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" event={"ID":"8b8351da-e624-4d42-be80-14e2c90c57f4","Type":"ContainerStarted","Data":"b123d80c9c294cebda319fb19dd5f8c84f5f8bd2e873afb724bc1cab5331e3f6"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.403832 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" event={"ID":"980ff476-0915-44c2-8665-41d9074e3763","Type":"ContainerStarted","Data":"5538309a571039b1ab8b68f878dfd217d02dc20f8d0cb090233a377f56e23b84"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.404131 4949 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xxm4k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.404187 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podUID="10228b44-2c32-4fab-a4f9-c703ef0b6b39" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.407945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb9s9" event={"ID":"33ca7885-743f-48cd-b3ba-80f9a1f8cf85","Type":"ContainerStarted","Data":"fa8d03737761ebb7770d3876c0c9250a9dba142993f1bbae8225ae59f81887b8"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.408119 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.408552 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" podStartSLOduration=122.408510582 podStartE2EDuration="2m2.408510582s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.406944729 +0000 UTC m=+144.216775587" watchObservedRunningTime="2026-01-20 14:52:28.408510582 +0000 UTC m=+144.218341430" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.416093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"ac36a34f4a6302371da08b626f0511942ccffe7f7de56791bb78a7fc7179a13d"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.409781 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.416917 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.417240 4949 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qw6xk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.417262 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podUID="0b89af20-11f6-4e88-8b1c-5e5ff5b47a70" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.426790 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.432641 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" podStartSLOduration=122.432586792 podStartE2EDuration="2m2.432586792s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.423496569 +0000 UTC m=+144.233327447" watchObservedRunningTime="2026-01-20 14:52:28.432586792 +0000 UTC m=+144.242417650" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.441205 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" podStartSLOduration=122.441185127 podStartE2EDuration="2m2.441185127s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.438715382 +0000 UTC m=+144.248546240" watchObservedRunningTime="2026-01-20 14:52:28.441185127 +0000 UTC m=+144.251015985" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 
14:52:28.453423 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.453603 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.953577494 +0000 UTC m=+144.763408352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.455493 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.456704 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" podStartSLOduration=122.456687261 podStartE2EDuration="2m2.456687261s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.453258673 +0000 UTC m=+144.263089541" watchObservedRunningTime="2026-01-20 14:52:28.456687261 +0000 UTC m=+144.266518119" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.458475 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.958461932 +0000 UTC m=+144.768292790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.473751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bb9s9" podStartSLOduration=122.473731118 podStartE2EDuration="2m2.473731118s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.47205552 +0000 UTC m=+144.281886378" watchObservedRunningTime="2026-01-20 14:52:28.473731118 +0000 UTC m=+144.283561976" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.493419 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podStartSLOduration=122.493399376 podStartE2EDuration="2m2.493399376s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.491039225 +0000 UTC m=+144.300870083" watchObservedRunningTime="2026-01-20 14:52:28.493399376 +0000 UTC m=+144.303230224" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.533057 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" podStartSLOduration=122.533038821 podStartE2EDuration="2m2.533038821s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.531428405 +0000 UTC m=+144.341259263" watchObservedRunningTime="2026-01-20 14:52:28.533038821 +0000 UTC m=+144.342869679" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.557206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.559896 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.059872104 +0000 UTC m=+144.869703042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.659149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.659550 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.159535346 +0000 UTC m=+144.969366204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.760261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.760409 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.260390019 +0000 UTC m=+145.070220887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.760924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.761253 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.261242368 +0000 UTC m=+145.071073226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.862389 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.862536 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.362503065 +0000 UTC m=+145.172333923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.862675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.863020 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.363009653 +0000 UTC m=+145.172840511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.963629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.963762 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.463744482 +0000 UTC m=+145.273575340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.964101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.964374 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.464366993 +0000 UTC m=+145.274197851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.997687 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:28 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.997760 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.065029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.065455 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:29.565436954 +0000 UTC m=+145.375267812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.118913 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.167283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.168152 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.668131549 +0000 UTC m=+145.477962407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.268485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.268693 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.768667401 +0000 UTC m=+145.578498259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.268862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.269124 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.769113427 +0000 UTC m=+145.578944285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.370291 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.370470 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.870446006 +0000 UTC m=+145.680276864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.370738 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.371102 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.871089618 +0000 UTC m=+145.680920476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.423983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"8b27e97a3a49b129c1e2c504b175540c2f99106cc06d1e35195868a0b3324986"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.425469 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"13f137922c00c0cdf0032d9d80ba9b1ff8fddb596158409726cf210b5b4cb664"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.427321 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"00050e542dc85b15c17d95b44edc637e6dd0cecd210f0f6c9937c9ef4015e132"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.427464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.430255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" 
event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"451df6a41dcbb24907ee38dbf28eb1ccb074f689112d9838bfda128a97ab0cae"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.432207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"4db7b9cde5c6b30778f765bc87f2a476bcbb865361bb82e84bd905821869e1f8"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.432329 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.434228 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"918de21dc6f04b4e7cfdc425de78cd179f7331133bcb46dc030f66710e1b4c50"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.434275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"a833e75596fde7deef6100be6af7719851d5b8f4ec76c72cd2aaba4a91b9f2d3"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.436045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"274bc8a72730e898c80a9ff7c7ad46ef05b7f1572184473f729a1d7550fc8b43"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.438080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" 
event={"ID":"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8","Type":"ContainerStarted","Data":"54d29b1ba35eb7f72b23fc34304c450d006507692651cf5dbd71a87c85cd3e7a"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.439865 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"5e7a5fa75e778cef7d12363811f6517dc5566793acad091725ed76bcbeee2345"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.441822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" event={"ID":"e45c974f-4645-4895-9f73-cfd03e798e00","Type":"ContainerStarted","Data":"ac04aaec015ed06b9d108fd834a620cd7794a897c430383cc6ba7d6f0ce22fe4"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444269 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444309 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444944 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"bfa7d01031966304a0f3591c2f8259b9c4c472ac4d4c0c02ed28d908183bd2ea"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445103 4949 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xxm4k container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445129 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podUID="10228b44-2c32-4fab-a4f9-c703ef0b6b39" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.459428 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" podStartSLOduration=123.459407489 podStartE2EDuration="2m3.459407489s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.457834185 +0000 UTC m=+145.267665043" watchObservedRunningTime="2026-01-20 14:52:29.459407489 +0000 UTC m=+145.269238347" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.469971 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.475002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 
crc kubenswrapper[4949]: E0120 14:52:29.476017 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.975998921 +0000 UTC m=+145.785829779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.513539 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" podStartSLOduration=123.513497021 podStartE2EDuration="2m3.513497021s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.488721368 +0000 UTC m=+145.298552236" watchObservedRunningTime="2026-01-20 14:52:29.513497021 +0000 UTC m=+145.323327879" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.515555 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" podStartSLOduration=123.515540882 podStartE2EDuration="2m3.515540882s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.511389449 +0000 UTC m=+145.321220337" 
watchObservedRunningTime="2026-01-20 14:52:29.515540882 +0000 UTC m=+145.325371740" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.537656 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" podStartSLOduration=123.537629773 podStartE2EDuration="2m3.537629773s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.535550741 +0000 UTC m=+145.345381629" watchObservedRunningTime="2026-01-20 14:52:29.537629773 +0000 UTC m=+145.347460641" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.577468 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" podStartSLOduration=123.577449534 podStartE2EDuration="2m3.577449534s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.574367938 +0000 UTC m=+145.384198796" watchObservedRunningTime="2026-01-20 14:52:29.577449534 +0000 UTC m=+145.387280392" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.586701 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.586974 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:30.086964891 +0000 UTC m=+145.896795749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.602555 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-j8fgh" podStartSLOduration=9.602531718 podStartE2EDuration="9.602531718s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.599169362 +0000 UTC m=+145.409000230" watchObservedRunningTime="2026-01-20 14:52:29.602531718 +0000 UTC m=+145.412362576" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.643386 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pcdvd" podStartSLOduration=9.643367914 podStartE2EDuration="9.643367914s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.642322757 +0000 UTC m=+145.452153625" watchObservedRunningTime="2026-01-20 14:52:29.643367914 +0000 UTC m=+145.453198772" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.643754 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" podStartSLOduration=123.643739526 podStartE2EDuration="2m3.643739526s" 
podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.626654008 +0000 UTC m=+145.436484866" watchObservedRunningTime="2026-01-20 14:52:29.643739526 +0000 UTC m=+145.453570384" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.686216 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" podStartSLOduration=123.686197159 podStartE2EDuration="2m3.686197159s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.665290638 +0000 UTC m=+145.475121496" watchObservedRunningTime="2026-01-20 14:52:29.686197159 +0000 UTC m=+145.496028017" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.687863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.688124 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.188109864 +0000 UTC m=+145.997940722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.703984 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" podStartSLOduration=123.70396384 podStartE2EDuration="2m3.70396384s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.685609498 +0000 UTC m=+145.495440366" watchObservedRunningTime="2026-01-20 14:52:29.70396384 +0000 UTC m=+145.513794698" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.751405 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" podStartSLOduration=123.751385783 podStartE2EDuration="2m3.751385783s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.749215168 +0000 UTC m=+145.559046026" watchObservedRunningTime="2026-01-20 14:52:29.751385783 +0000 UTC m=+145.561216641" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.752654 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" podStartSLOduration=124.752643496 podStartE2EDuration="2m4.752643496s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.70626322 +0000 UTC m=+145.516094078" watchObservedRunningTime="2026-01-20 14:52:29.752643496 +0000 UTC m=+145.562474354" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.780190 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" podStartSLOduration=123.780168654 podStartE2EDuration="2m3.780168654s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.779709628 +0000 UTC m=+145.589540486" watchObservedRunningTime="2026-01-20 14:52:29.780168654 +0000 UTC m=+145.589999522" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.790372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.790796 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.290777659 +0000 UTC m=+146.100608517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.891997 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.892397 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.392376408 +0000 UTC m=+146.202207266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.892828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.893160 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.393148045 +0000 UTC m=+146.202978903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.993540 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.994039 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.494017868 +0000 UTC m=+146.303848726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:29.999435 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:30 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:30 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:30 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:29.999494 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.094759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.095103 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 14:52:30.595088068 +0000 UTC m=+146.404918926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.196319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.196786 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.696753259 +0000 UTC m=+146.506584117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.298035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.298343 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.798331317 +0000 UTC m=+146.608162175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.398804 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.399185 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.899167329 +0000 UTC m=+146.708998187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.454710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"51abf2cde66e3374d1216323da07ae7e0617d8bda55b3f0ca19d44fdeabdff67"} Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.457948 4949 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.458075 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.458115 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.504388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.507639 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.007622413 +0000 UTC m=+146.817453271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.606426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.606849 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.10682892 +0000 UTC m=+146.916659788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.708230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.708578 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.208564923 +0000 UTC m=+147.018395781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.809068 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.809274 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.3092465 +0000 UTC m=+147.119077348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.809383 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.809891 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.309875881 +0000 UTC m=+147.119706739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.910629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.910854 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.410822537 +0000 UTC m=+147.220653405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.910913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.911253 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.411242682 +0000 UTC m=+147.221073600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.941584 4949 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T14:52:30.457969644Z","Handler":null,"Name":""} Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.946030 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.946918 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.948912 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.950483 4949 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.950535 4949 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.970548 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.996509 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:30 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:30 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:30 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.996591 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.011628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.011930 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.012132 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.012168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.015535 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112974 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.113000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.113850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: 
\"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.114224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.116052 4949 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.116084 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.136097 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.143626 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.144794 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.146102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.154893 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.156307 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213620 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213683 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") 
pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.260816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.314951 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315189 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315671 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.336646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.348209 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.352130 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.357292 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.363111 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.364481 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.375325 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.418553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.418992 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.419046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.468606 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.479297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"8ba0a411ba3c4385deefeb5e66683996b46a54845f50557b17601a3469b0487f"} Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.479346 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"3c5a06049a4e031cae18188af6a5d7ab1b23ee4921febe21c4daf20e2466c8d2"} Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526657 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.531067 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.531436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.541533 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" podStartSLOduration=11.541505824 podStartE2EDuration="11.541505824s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:31.529445568 +0000 UTC m=+147.339276426" watchObservedRunningTime="2026-01-20 14:52:31.541505824 +0000 UTC m=+147.351336682" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.542807 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.543721 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.556836 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.593383 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.626417 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"certified-operators-mhc4n\" (UID: 
\"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.701388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.730925 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.732065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.747769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.870881 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.923214 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:52:31 crc kubenswrapper[4949]: W0120 14:52:31.937865 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1fd354_0dd7_4186_b8f7_eb06991f4632.slice/crio-e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3 WatchSource:0}: Error finding container e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3: Status 404 returned error can't find the container with id e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3 Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.966246 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:52:31 crc kubenswrapper[4949]: W0120 14:52:31.984191 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595f245f_676f_4ef1_8073_5e235b4a338a.slice/crio-75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9 WatchSource:0}: Error finding container 75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9: Status 404 returned error can't find the container with id 75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9 Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.999661 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:31 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:31 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:31 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:31.999975 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.056004 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.074387 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.075276 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.079001 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.079260 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.087197 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.141704 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.141791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.143326 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243210 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243279 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.290060 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.393969 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.437204 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488152 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be" exitCode=0 Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerStarted","Data":"cfc38db22b8953300879f0bf00176a88bf6635c28a6beffd49284a3128d08941"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.492892 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495452 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" exitCode=0 Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495542 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" 
event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"5386e0b6f5f81c0affeb756c00a742c0370df0824ff74eddb71abeead647e2e6"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.503773 4949 generic.go:334] "Generic (PLEG): container finished" podID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerID="f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf" exitCode=0 Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.503854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerDied","Data":"f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509277 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a" exitCode=0 Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509357 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509391 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerStarted","Data":"461db7652293e0c019275b02f84835686fdbeffea7ab03b5ba355fd27be457ec"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518488 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" exitCode=0 Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518604 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518637 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerStarted","Data":"e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543362 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerStarted","Data":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerStarted","Data":"75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9"} Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543887 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548051 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548111 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.552289 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.554682 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.555137 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.568417 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.636652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.662529 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" podStartSLOduration=126.662489804 podStartE2EDuration="2m6.662489804s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:32.632005815 +0000 UTC m=+148.441836673" watchObservedRunningTime="2026-01-20 14:52:32.662489804 +0000 UTC m=+148.472320672" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.673391 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.780778 4949 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.782088 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.798869 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.810435 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.918191 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.950051 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.957578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.957701 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.960940 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.992827 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.998259 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:32 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:32 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:32 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.998309 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058121 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058181 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod 
\"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.120726 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629 WatchSource:0}: Error finding container 95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629: Status 404 returned error can't find the container with id 95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629 Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.160009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.160684 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.175060 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08 WatchSource:0}: Error finding container cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08: Status 404 returned error can't find the container with id cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08 Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.175454 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: 
I0120 14:52:33.206372 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206428 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206480 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206538 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.214189 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.214244 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.216068 4949 patch_prober.go:28] interesting pod/console-f9d7485db-w9d9r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 
10.217.0.28:8443: connect: connection refused" start-of-body= Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.216123 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-w9d9r" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.279063 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.340011 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.341024 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.349180 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.362975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.363008 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc 
kubenswrapper[4949]: I0120 14:52:33.363274 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.412753 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.457194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465070 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " 
pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.466241 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.466735 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.488778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.567113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629"} Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.568263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08"} Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.569893 4949 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerStarted","Data":"8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276"} Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.571134 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ae626303c8f63ce3416f52b81cac4275bb0c82ef7397eda9495ff514cb6ded8b"} Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.664211 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.694877 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.701874 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da2cb76_6534_4d77_95c0_3d6aaff0de4b.slice/crio-2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c WatchSource:0}: Error finding container 2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c: Status 404 returned error can't find the container with id 2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.832055 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.870887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.870940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.871041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.871933 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.878197 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq" (OuterVolumeSpecName: "kube-api-access-nn2mq") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). 
InnerVolumeSpecName "kube-api-access-nn2mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.895732 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972763 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972789 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972797 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.996933 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:33 crc kubenswrapper[4949]: [+]has-synced ok Jan 20 14:52:33 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:33 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.997062 4949 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142268 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:52:34 crc kubenswrapper[4949]: E0120 14:52:34.142615 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142631 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142788 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.143691 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.143852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.145872 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.151123 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.279096 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.279570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.280217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381308 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381374 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.382296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.382778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.406805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8jf\" (UniqueName: 
\"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.476870 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.544503 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.551682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.562583 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597296 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" exitCode=0 Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.599720 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2cc000bc63b42e439848876226fd338380a4a30edbcad8125873ef756ca46287"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerDied","Data":"f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602674 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602691 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.605231 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86921bad44dd07af0ecd8d9c11d27e021c1063acd7c271a109b11de4e3de4505"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.608827 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerID="3ab19178e8e5b3e5d35695a1e839bb6906d49d06b56d49738a525f3707f21354" exitCode=0 Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.608874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerDied","Data":"3ab19178e8e5b3e5d35695a1e839bb6906d49d06b56d49738a525f3707f21354"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.636462 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"485f0912a018177bdbfd4999745a51a501edd183b90801dc7a2d128bbf797b1b"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.636867 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660322 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e" exitCode=0 Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660412 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerStarted","Data":"30a76834740fb17389d4718b7b04b96d874c290be714a19e58cb218d3172d38f"} Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.686910 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.687020 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.687039 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.800416 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.800627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.812451 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.829609 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.881222 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.001256 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.003801 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.193980 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.703961 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" exitCode=0 Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.704040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688"} Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.704363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"c64e483ea895830221bcb3fd9971d012c5d2f19d12679860699582d93fd37367"} Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719014 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166" exitCode=0 Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" 
event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"} Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719153 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"7033fb6c503e5baf2b93082863e51771e454c06c8d508e3b8282afa6c65fa61f"} Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.166006 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335136 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335230 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b0db30d-95d1-498f-8c18-5dd0a553d48f" (UID: "9b0db30d-95d1-498f-8c18-5dd0a553d48f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335339 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335664 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.343969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b0db30d-95d1-498f-8c18-5dd0a553d48f" (UID: "9b0db30d-95d1-498f-8c18-5dd0a553d48f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.436595 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761143 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerDied","Data":"8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276"} Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761241 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276" Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761366 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696202 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 14:52:37 crc kubenswrapper[4949]: E0120 14:52:37.696467 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696481 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696625 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.697072 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.699216 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.699430 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.703485 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.855678 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.855743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.956877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.956942 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.957038 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.102511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.163939 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.321453 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.898559 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 14:52:39 crc kubenswrapper[4949]: I0120 14:52:39.962556 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerStarted","Data":"49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a"} Jan 20 14:52:39 crc kubenswrapper[4949]: I0120 14:52:39.962880 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerStarted","Data":"c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8"} Jan 20 14:52:40 crc kubenswrapper[4949]: I0120 14:52:40.971869 4949 generic.go:334] "Generic (PLEG): container finished" podID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerID="49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a" exitCode=0 Jan 20 14:52:40 crc kubenswrapper[4949]: I0120 14:52:40.971917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerDied","Data":"49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a"} Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.211538 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.249325 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.255526 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.720252 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.726994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.859906 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.924627 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"688f7366-a782-4bc1-af28-3ac607a6e5ee\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.924865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"688f7366-a782-4bc1-af28-3ac607a6e5ee\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.925041 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"688f7366-a782-4bc1-af28-3ac607a6e5ee" (UID: "688f7366-a782-4bc1-af28-3ac607a6e5ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.925483 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.928492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "688f7366-a782-4bc1-af28-3ac607a6e5ee" (UID: "688f7366-a782-4bc1-af28-3ac607a6e5ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.000133 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.027201 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerDied","Data":"c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8"} Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044761 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8" Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044505 4949 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 14:52:51 crc kubenswrapper[4949]: I0120 14:52:51.354703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:57 crc kubenswrapper[4949]: I0120 14:52:57.152784 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:52:57 crc kubenswrapper[4949]: I0120 14:52:57.153183 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:53:03 crc kubenswrapper[4949]: I0120 14:53:03.665090 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.444079 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.444875 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kvhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jpqvc_openshift-marketplace(78cf28ec-e605-49c2-882a-5cb98697605b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.446488 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jpqvc" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" Jan 20 14:53:10 crc 
kubenswrapper[4949]: E0120 14:53:10.618870 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jpqvc" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.678853 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.679036 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89h6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mhc4n_openshift-marketplace(7bad3a1d-1239-429c-b5a5-96f0bc2570ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.680230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" Jan 20 14:53:11 crc 
kubenswrapper[4949]: I0120 14:53:11.399105 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.531936 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 14:53:11 crc kubenswrapper[4949]: E0120 14:53:11.532157 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532168 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532271 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532716 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.541586 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.541849 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.542932 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.632100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.632162 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.767750 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.889697 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.229239 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.294735 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.294949 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skxx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qwbjk_openshift-marketplace(461b9e2b-6f01-4719-946b-3c8266281ea4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.296141 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" Jan 20 14:53:12 crc 
kubenswrapper[4949]: I0120 14:53:12.784277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.619872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.693301 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.693504 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cndjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qnf74_openshift-marketplace(13eef670-55b3-4832-a856-fe2bf8239996): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.694773 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" Jan 20 14:53:15 crc 
kubenswrapper[4949]: E0120 14:53:15.717009 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.717509 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89v9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-5llwq_openshift-marketplace(df1fd354-0dd7-4186-b8f7-eb06991f4632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.718831 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5llwq" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.750095 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.750255 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sr2h8_openshift-marketplace(8827d4ac-468d-4ceb-91c1-fb310a00ddcd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.751442 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" Jan 20 14:53:15 crc 
kubenswrapper[4949]: E0120 14:53:15.757961 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.758086 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9xwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-n6ccj_openshift-marketplace(3da2cb76-6534-4d77-95c0-3d6aaff0de4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.759370 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.781172 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.781355 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr8jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-p7gt2_openshift-marketplace(2747a148-c24a-4d08-a2ca-19261c14c359): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.782581 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" Jan 20 14:53:15 crc 
kubenswrapper[4949]: I0120 14:53:15.809250 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"] Jan 20 14:53:15 crc kubenswrapper[4949]: W0120 14:53:15.822549 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4eae9d_b492_4fd3_8baf_38ed726d9e4c.slice/crio-7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48 WatchSource:0}: Error finding container 7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48: Status 404 returned error can't find the container with id 7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48 Jan 20 14:53:15 crc kubenswrapper[4949]: I0120 14:53:15.851505 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 14:53:15 crc kubenswrapper[4949]: W0120 14:53:15.852608 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod45fa4b40_fafa_4aad_ac39_41cc0503c52a.slice/crio-64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce WatchSource:0}: Error finding container 64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce: Status 404 returned error can't find the container with id 64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.178709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"bfcbdf5bb88213505bc39059b435103825dec717d2ce2c1d476ad3022dc63743"} Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.179082 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" 
event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48"} Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.180913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerStarted","Data":"64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce"} Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181806 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181878 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181972 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5llwq" 
podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.182101 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.289167 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.290055 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.305876 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.395927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.395989 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.396193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.496932 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497066 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497104 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497178 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.520928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.637035 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.025345 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:17 crc kubenswrapper[4949]: W0120 14:53:17.031436 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7865dcac_fc72_4c7f_bd57_11f1c3bbb404.slice/crio-88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758 WatchSource:0}: Error finding container 88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758: Status 404 returned error can't find the container with id 88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758 Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.187442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerStarted","Data":"88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.189197 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"32959a5386c9d5cf748fe8c258b13e375a60ae0d6ca180890d6058f7fe333898"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.190805 4949 generic.go:334] 
"Generic (PLEG): container finished" podID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerID="db5e90400fc32755b747912352b53318f41bc09940e363f2562a5b96d6685824" exitCode=0 Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.190836 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerDied","Data":"db5e90400fc32755b747912352b53318f41bc09940e363f2562a5b96d6685824"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.224614 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hlfls" podStartSLOduration=171.224595567 podStartE2EDuration="2m51.224595567s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:17.203784451 +0000 UTC m=+193.013615309" watchObservedRunningTime="2026-01-20 14:53:17.224595567 +0000 UTC m=+193.034426425" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.203817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerStarted","Data":"2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f"} Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.227123 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.226972523 podStartE2EDuration="2.226972523s" podCreationTimestamp="2026-01-20 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:18.220257042 +0000 UTC m=+194.030087900" watchObservedRunningTime="2026-01-20 14:53:18.226972523 +0000 UTC m=+194.036803391" Jan 20 14:53:18 crc kubenswrapper[4949]: 
I0120 14:53:18.426292 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.521514 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.521586 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.522922 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45fa4b40-fafa-4aad-ac39-41cc0503c52a" (UID: "45fa4b40-fafa-4aad-ac39-41cc0503c52a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.529749 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45fa4b40-fafa-4aad-ac39-41cc0503c52a" (UID: "45fa4b40-fafa-4aad-ac39-41cc0503c52a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.623165 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.623216 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.208946 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerDied","Data":"64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce"} Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.208990 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce" Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.209049 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152381 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152903 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152942 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.153433 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.153537 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" gracePeriod=600 Jan 20 14:53:28 crc kubenswrapper[4949]: I0120 14:53:28.255803 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" exitCode=0 Jan 20 14:53:28 crc kubenswrapper[4949]: I0120 14:53:28.255847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.293083 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.293171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.296292 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.296341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.301825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 
14:53:35.308260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.311184 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.313291 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.313371 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.315705 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.315773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.334045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.336401 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.345107 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.345187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.348446 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.349599 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.351842 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.351910 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.354086 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.354126 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.361879 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerStarted","Data":"e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.366366 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerStarted","Data":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.368600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerStarted","Data":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.370771 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" 
event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerStarted","Data":"fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.412133 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhc4n" podStartSLOduration=2.120435396 podStartE2EDuration="1m5.412115137s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.516934802 +0000 UTC m=+148.326765660" lastFinishedPulling="2026-01-20 14:53:35.808614543 +0000 UTC m=+211.618445401" observedRunningTime="2026-01-20 14:53:36.409031016 +0000 UTC m=+212.218861874" watchObservedRunningTime="2026-01-20 14:53:36.412115137 +0000 UTC m=+212.221945995" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.426974 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" containerID="cri-o://244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" gracePeriod=15 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.472109 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jpqvc" podStartSLOduration=3.247475246 podStartE2EDuration="1m6.472090356s" podCreationTimestamp="2026-01-20 14:52:30 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.492652395 +0000 UTC m=+148.302483253" lastFinishedPulling="2026-01-20 14:53:35.717267505 +0000 UTC m=+211.527098363" observedRunningTime="2026-01-20 14:53:36.468264178 +0000 UTC m=+212.278095036" watchObservedRunningTime="2026-01-20 14:53:36.472090356 +0000 UTC m=+212.281921224" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.486202 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwbjk" 
podStartSLOduration=2.2273982549999998 podStartE2EDuration="1m3.486185863s" podCreationTimestamp="2026-01-20 14:52:33 +0000 UTC" firstStartedPulling="2026-01-20 14:52:34.663280949 +0000 UTC m=+150.473111807" lastFinishedPulling="2026-01-20 14:53:35.922068557 +0000 UTC m=+211.731899415" observedRunningTime="2026-01-20 14:53:36.482608284 +0000 UTC m=+212.292439142" watchObservedRunningTime="2026-01-20 14:53:36.486185863 +0000 UTC m=+212.296016721" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.500509 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5llwq" podStartSLOduration=2.191202157 podStartE2EDuration="1m5.500486378s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.526884474 +0000 UTC m=+148.336715332" lastFinishedPulling="2026-01-20 14:53:35.836168695 +0000 UTC m=+211.645999553" observedRunningTime="2026-01-20 14:53:36.498353941 +0000 UTC m=+212.308184809" watchObservedRunningTime="2026-01-20 14:53:36.500486378 +0000 UTC m=+212.310317236" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.780978 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.814230 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"] Jan 20 14:53:36 crc kubenswrapper[4949]: E0120 14:53:36.814727 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.814808 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: E0120 14:53:36.814884 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816425 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816508 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816997 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.858368 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"] Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869931 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869966 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869990 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870210 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870267 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870298 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870315 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870367 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870389 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mj4h\" (UniqueName: 
\"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870561 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870581 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870641 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870657 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870712 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.871148 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.871866 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872194 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.876188 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.876534 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.877580 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.877776 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.878026 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.878094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.879507 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns" (OuterVolumeSpecName: "kube-api-access-scxns") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "kube-api-access-scxns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.879639 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.891985 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mj4h\" (UniqueName: \"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972997 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973014 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973061 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973079 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973132 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973148 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973166 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973215 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973298 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973308 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973319 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973330 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973339 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973348 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973358 4949 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973366 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973375 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973384 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973393 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973402 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973413 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973422 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973658 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.974728 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.975456 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.976011 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.977137 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.978282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.979895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.982866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.982975 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983156 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983428 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983921 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.999869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mj4h\" (UniqueName: \"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.136198 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.356868 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"]
Jan 20 14:53:37 crc kubenswrapper[4949]: W0120 14:53:37.361635 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9029eb7_d052_4ee9_a01a_3bef83bcf99c.slice/crio-0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9 WatchSource:0}: Error finding container 0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9: Status 404 returned error can't find the container with id 0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.396555 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"}
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.397322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" event={"ID":"f9029eb7-d052-4ee9-a01a-3bef83bcf99c","Type":"ContainerStarted","Data":"0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9"}
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400075 4949 generic.go:334] "Generic (PLEG): container finished" podID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" exitCode=0
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerDied","Data":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"}
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerDied","Data":"d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f"}
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400154 4949 scope.go:117] "RemoveContainer" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400299 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.428968 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"]
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.434892 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"]
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.451124 4949 scope.go:117] "RemoveContainer" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"
Jan 20 14:53:37 crc kubenswrapper[4949]: E0120 14:53:37.451891 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": container with ID starting with 244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a not found: ID does not exist" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"
Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.451929 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"} err="failed to get container status \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": rpc error: code = NotFound desc = could not find container \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": container with ID starting with 244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a not found: ID does not exist"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.413476 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"}
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.415207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"}
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.418157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"}
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.419629 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" event={"ID":"f9029eb7-d052-4ee9-a01a-3bef83bcf99c","Type":"ContainerStarted","Data":"c54c6c4f3af2abe5a9933a85f07ad19a7d6ba1c7790d6aa7bce1393fcc21b177"}
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.419774 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.423980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.431080 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sr2h8" podStartSLOduration=2.902769578 podStartE2EDuration="1m7.431065421s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.496569081 +0000 UTC m=+148.306399939" lastFinishedPulling="2026-01-20 14:53:37.024864924 +0000 UTC m=+212.834695782" observedRunningTime="2026-01-20 14:53:38.430735299 +0000 UTC m=+214.240566177" watchObservedRunningTime="2026-01-20 14:53:38.431065421 +0000 UTC m=+214.240896279"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.449822 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" podStartSLOduration=27.449802167 podStartE2EDuration="27.449802167s" podCreationTimestamp="2026-01-20 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:38.447033557 +0000 UTC m=+214.256864405" watchObservedRunningTime="2026-01-20 14:53:38.449802167 +0000 UTC m=+214.259633025"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.473431 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7gt2" podStartSLOduration=3.281834644 podStartE2EDuration="1m4.473412966s" podCreationTimestamp="2026-01-20 14:52:34 +0000 UTC" firstStartedPulling="2026-01-20 14:52:35.708759169 +0000 UTC m=+151.518590027" lastFinishedPulling="2026-01-20 14:53:36.900337481 +0000 UTC m=+212.710168349" observedRunningTime="2026-01-20 14:53:38.473318523 +0000 UTC m=+214.283149381" watchObservedRunningTime="2026-01-20 14:53:38.473412966 +0000 UTC m=+214.283243824"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.519567 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6ccj" podStartSLOduration=3.479045637 podStartE2EDuration="1m6.519548017s" podCreationTimestamp="2026-01-20 14:52:32 +0000 UTC" firstStartedPulling="2026-01-20 14:52:34.603871873 +0000 UTC m=+150.413702731" lastFinishedPulling="2026-01-20 14:53:37.644374253 +0000 UTC m=+213.454205111" observedRunningTime="2026-01-20 14:53:38.499035408 +0000 UTC m=+214.308866266" watchObservedRunningTime="2026-01-20 14:53:38.519548017 +0000 UTC m=+214.329378875"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.520996 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qnf74" podStartSLOduration=3.218602509 podStartE2EDuration="1m4.520990879s" podCreationTimestamp="2026-01-20 14:52:34 +0000 UTC" firstStartedPulling="2026-01-20 14:52:35.72100363 +0000 UTC m=+151.530834488" lastFinishedPulling="2026-01-20 14:53:37.023392 +0000 UTC m=+212.833222858" observedRunningTime="2026-01-20 14:53:38.519648571 +0000 UTC m=+214.329479429" watchObservedRunningTime="2026-01-20 14:53:38.520990879 +0000 UTC m=+214.330821737"
Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.795662 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" path="/var/lib/kubelet/pods/45bacc20-7998-4250-bbd3-fd1d24741ea7/volumes"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.261128 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.261538 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.362990 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.469847 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.469898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.479207 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.545030 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.701889 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.702156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.737145 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.871809 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.872061 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.909952 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.491531 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.494447 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.507825 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:43 crc kubenswrapper[4949]: 
I0120 14:53:43.279136 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.279194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.335870 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.499850 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.664999 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.665050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.716012 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.477543 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.477948 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.497783 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.535335 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.620095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.882318 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.882709 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.460084 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5llwq" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" containerID="cri-o://fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" gracePeriod=2 Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.502760 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.619127 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.619619 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" containerID="cri-o://e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" gracePeriod=2 Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.797032 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907024 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907127 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.908868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities" (OuterVolumeSpecName: "utilities") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.913794 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v" (OuterVolumeSpecName: "kube-api-access-89v9v") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "kube-api-access-89v9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.926225 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" probeResult="failure" output=< Jan 20 14:53:45 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 14:53:45 crc kubenswrapper[4949]: > Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.955343 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009053 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009097 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009108 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.474005 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" exitCode=0 Jan 20 14:53:46 
crc kubenswrapper[4949]: I0120 14:53:46.474120 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477599 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" exitCode=0 Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477689 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477727 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477767 4949 scope.go:117] "RemoveContainer" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.511969 4949 scope.go:117] "RemoveContainer" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.520404 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.523774 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.550206 4949 scope.go:117] "RemoveContainer" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.569191 4949 scope.go:117] "RemoveContainer" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 14:53:46.570068 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": container with ID starting with fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35 not found: ID does not exist" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570105 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} err="failed to get container status \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": rpc error: code = NotFound desc = could not find container \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": container with ID starting with fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570132 4949 scope.go:117] "RemoveContainer" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 14:53:46.570681 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": container with ID starting with b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30 not found: ID does not exist" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570720 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30"} err="failed to get container status \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": rpc error: code = NotFound desc = could not find container \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": container with ID starting with b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570749 4949 scope.go:117] "RemoveContainer" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 
14:53:46.573895 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": container with ID starting with 41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90 not found: ID does not exist" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.573936 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90"} err="failed to get container status \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": rpc error: code = NotFound desc = could not find container \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": container with ID starting with 41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.574190 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717205 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.719179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities" (OuterVolumeSpecName: "utilities") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.722248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r" (OuterVolumeSpecName: "kube-api-access-89h6r") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "kube-api-access-89h6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.758648 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.797940 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" path="/var/lib/kubelet/pods/df1fd354-0dd7-4186-b8f7-eb06991f4632/volumes" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818622 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818661 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818671 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.022912 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.023247 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" 
containerName="registry-server" containerID="cri-o://fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" gracePeriod=2 Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.484617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"461db7652293e0c019275b02f84835686fdbeffea7ab03b5ba355fd27be457ec"} Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.484632 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.485033 4949 scope.go:117] "RemoveContainer" containerID="e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.489788 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" exitCode=0 Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.489834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03"} Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.501565 4949 scope.go:117] "RemoveContainer" containerID="d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.507422 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.511298 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.520717 
4949 scope.go:117] "RemoveContainer" containerID="e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.888064 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037304 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037411 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.038868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities" (OuterVolumeSpecName: "utilities") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.045688 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8" (OuterVolumeSpecName: "kube-api-access-skxx8") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "kube-api-access-skxx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.061187 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139484 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139584 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139605 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499428 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" 
event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"30a76834740fb17389d4718b7b04b96d874c290be714a19e58cb218d3172d38f"} Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499578 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499764 4949 scope.go:117] "RemoveContainer" containerID="fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.513382 4949 scope.go:117] "RemoveContainer" containerID="adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.526777 4949 scope.go:117] "RemoveContainer" containerID="ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.577342 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.611404 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.797960 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" path="/var/lib/kubelet/pods/461b9e2b-6f01-4719-946b-3c8266281ea4/volumes" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.799825 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" path="/var/lib/kubelet/pods/7bad3a1d-1239-429c-b5a5-96f0bc2570ad/volumes" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835170 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835831 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835847 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835861 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835869 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835881 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835889 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835904 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835912 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835920 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835928 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835937 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835944 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835953 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835960 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835971 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835980 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835993 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836001 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836125 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836141 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836159 4949 
memory_manager.go:354] "RemoveStaleState removing state" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836673 4949 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836819 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836852 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837002 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837025 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837105 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837164 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837191 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837430 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837680 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837705 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837713 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837745 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837755 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837766 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc 
kubenswrapper[4949]: I0120 14:53:54.837773 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837789 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837871 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837887 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837894 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838032 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838045 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838059 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838070 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838083 4949 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.839862 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.876053 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922201 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922343 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922470 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.942824 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.944066 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.944230 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.984305 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.984941 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.985295 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023585 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023663 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023743 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023742 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023790 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.024075 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.024102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.173042 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: W0120 14:53:55.197712 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471 WatchSource:0}: Error finding container 9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471: Status 404 returned error can't find the container with id 9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471 Jan 20 14:53:55 crc kubenswrapper[4949]: E0120 14:53:55.204980 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c7820a1e677d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,LastTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.537639 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.538582 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" exitCode=2 Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.539945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471"} Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.550298 4949 generic.go:334] "Generic (PLEG): container finished" podID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerID="2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.550436 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerDied","Data":"2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f"} Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.551630 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.552184 4949 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.552560 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.555574 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556308 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556347 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556367 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" exitCode=0 Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.564373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57"} Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.565140 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.565930 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.566671 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.569189 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.570344 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" exitCode=0 Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.779374 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.780658 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.781036 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.781371 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.785335 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.786381 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.786752 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787020 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787307 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787585 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859870 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 
14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859987 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860044 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock" (OuterVolumeSpecName: "var-lock") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860104 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860083 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860112 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860137 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860387 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860410 4949 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860422 4949 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860433 4949 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860422 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.866960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.961686 4949 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.961715 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578557 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerDied","Data":"88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758"} Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578627 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578675 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.582890 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.584669 4949 scope.go:117] "RemoveContainer" containerID="06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.584677 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.591684 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.592120 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.592644 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.593134 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.602048 4949 scope.go:117] "RemoveContainer" containerID="04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609073 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609503 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609798 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.610012 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.617388 4949 scope.go:117] "RemoveContainer" containerID="7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.629685 4949 scope.go:117] "RemoveContainer" containerID="903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.640091 4949 scope.go:117] "RemoveContainer" containerID="345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 
14:53:58.653191 4949 scope.go:117] "RemoveContainer" containerID="720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.797347 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 14:53:59 crc kubenswrapper[4949]: E0120 14:53:59.906013 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c7820a1e677d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,LastTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.335249 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336140 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336330 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336466 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 
14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336648 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336660 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.791189 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.795129 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.795696 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.977691 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 
14:54:04.978308 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.978656 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979080 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979359 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.979385 4949 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979804 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Jan 20 14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.180671 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Jan 20 
14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.581692 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.788407 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.789257 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.791087 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.791529 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.812425 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.812697 4949 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.813153 4949 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.813806 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: W0120 14:54:05.838168 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23 WatchSource:0}: Error finding container 2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23: Status 404 returned error can't find the container with id 2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23 Jan 20 14:54:06 crc kubenswrapper[4949]: E0120 14:54:06.383652 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.634967 4949 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d3dd0c06d401501579935b66e49654c3411ad33eea41fd49d5f4d6cfd85b87e3" exitCode=0 Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635016 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d3dd0c06d401501579935b66e49654c3411ad33eea41fd49d5f4d6cfd85b87e3"} Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23"} Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635336 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635350 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:06 crc kubenswrapper[4949]: E0120 14:54:06.635812 4949 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635848 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.636330 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: 
connect: connection refused" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.636674 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652262 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"27bec0e4741efd495c8af96759bd72a036e9c4d7ba91f8b673ab18784c51a5bf"} Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652741 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2a4b029a423926f16871a3e9269a8618ea80179ba2a549951c5ff5f6bb110614"} Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652756 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"069731a5c7ef2920f465f5e2789ae4bce0b67035ef4b27fcc6457318153b6cdc"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ace5f3bbe362cb444c944705d47bf10d9deda50903b7882c1a94140063332645"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661351 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f4abaa5724ea799ffe29595239d8d4210fa0cd88877905e6722cffe5ce5f2ec"} Jan 20 
14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661698 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661734 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665787 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665861 4949 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981" exitCode=1 Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.666642 4949 scope.go:117] "RemoveContainer" containerID="97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981" Jan 20 14:54:09 crc kubenswrapper[4949]: I0120 14:54:09.675127 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 14:54:09 crc kubenswrapper[4949]: I0120 14:54:09.675810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9354d33fb531351a332748f858821a6ead62d29529834db537a435a144f7ee4"} Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.814748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.814802 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.822455 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:11 crc kubenswrapper[4949]: I0120 14:54:11.305908 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:54:13 crc kubenswrapper[4949]: I0120 14:54:13.693794 4949 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.700000 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.700030 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.705110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.811982 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8985c76-0d54-4daf-bf6c-39514ca3a750"
Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.704893 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e"
Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.705243 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e"
Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.707650 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8985c76-0d54-4daf-bf6c-39514ca3a750"
Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.755071 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.759863 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.037698 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.192840 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.468301 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.710375 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.006993 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.302046 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.723701 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.865757 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.967973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.039771 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.120928 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.686698 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.976164 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.104175 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.164995 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.332435 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.551786 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.688703 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.758070 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.762878 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.167314 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.707408 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.780102 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.028313 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.354971 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.515925 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.563552 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.750978 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.263399 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.564296 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.745983 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.783601 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.797867 4949 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.895679 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.368318 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.460277 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.614504 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.687381 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.700180 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.701447 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.890598 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.905066 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.083393 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.208810 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.303369 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.348396 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.462419 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.483948 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.507119 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.528932 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.709027 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.743606 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.912668 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.930649 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.978110 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.061695 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.071618 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.165342 4949 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.226777 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.300491 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.386754 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.442765 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.563973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.629405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.688825 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.776936 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.823124 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.870153 4949 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.948664 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.037878 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.057861 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.085548 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.171268 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.177228 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.286039 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.468603 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.500258 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.508489 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.514096 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.653802 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.777369 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.862978 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.867183 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.144377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.186675 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.234209 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.332994 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.410732 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.459479 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.565431 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.571836 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.652084 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.683827 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.790457 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.881380 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.929319 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.939854 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.954780 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.989994 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.021991 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.048625 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.200954 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.287677 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.303112 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.351318 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.432649 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.433741 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.472492 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.488006 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.497697 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.554049 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.611883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.653434 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.739369 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.772776 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.793856 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.834626 4949 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.838535 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.838501428 podStartE2EDuration="38.838501428s" podCreationTimestamp="2026-01-20 14:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:13.724078784 +0000 UTC m=+249.533909642" watchObservedRunningTime="2026-01-20 14:54:32.838501428 +0000 UTC m=+268.648332296"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.840105 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.840155 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.844566 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.863574 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.863555209 podStartE2EDuration="19.863555209s" podCreationTimestamp="2026-01-20 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:32.862603248 +0000 UTC m=+268.672434146" watchObservedRunningTime="2026-01-20 14:54:32.863555209 +0000 UTC m=+268.673386067"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.904858 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.936694 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.942699 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.966040 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.265261 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.279038 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.291256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.626940 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.664955 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.730639 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.739979 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.797239 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.921343 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.928423 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.070345 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.101905 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.119334 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.161480 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.285533 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.303500 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.473604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.605810 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.689851 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.968151 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.968570 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.972742 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.038798 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.054174 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.153750 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.170303 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.197167 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.277714 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.322230 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.341415 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.369637 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.454828 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.512099 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.527139 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.529033 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.562242 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.688011 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.797075 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.799476 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.803366 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.822912 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.854247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.869321 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.875251 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.892779 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.962008 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.963469 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.979454 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.002291 4949 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.002555 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" gracePeriod=5
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.259296 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.268807 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.350508 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.353957 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.364585 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.667243 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.716247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.791964 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.881968 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.035044 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.054501 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.057363 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.064547 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.083089 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.128275 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.258477 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.385428 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.386658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.468847 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.481450 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.553894 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.675601 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.844760 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.888366 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 20 14:54:37
crc kubenswrapper[4949]: I0120 14:54:37.919747 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.930592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.028239 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.028805 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.056352 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.160133 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.162252 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.174563 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.233286 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.248479 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.270015 4949 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.270360 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.387249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.388227 4949 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.496725 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.517340 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.740995 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.786308 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.843661 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.997824 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.003240 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.092481 4949 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.156769 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.284963 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.304384 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.482171 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.559086 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.567659 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.634178 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.664574 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.724073 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.858035 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.871372 
4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.873622 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.047656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.050274 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.274017 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.279219 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.527077 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.548636 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.684248 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.796370 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.801111 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.837708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.024208 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.114335 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.221002 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.574277 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.574359 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639181 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639248 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639277 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639309 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639331 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639375 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639622 4949 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639634 4949 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639641 4949 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639651 4949 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.648214 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.741221 4949 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.773292 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.831444 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846089 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846196 4949 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" exitCode=137 Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846252 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846270 4949 scope.go:117] "RemoveContainer" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.862252 4949 scope.go:117] "RemoveContainer" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: E0120 14:54:41.862686 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": container with ID starting with ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57 not found: ID does not exist" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.862737 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57"} err="failed to get container status \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": rpc error: code = NotFound desc = could not find container \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": container with ID starting with ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57 not found: ID does not exist" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.947715 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.120097 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.215375 4949 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.259761 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.263771 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.271566 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.319183 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.585558 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.796434 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.796690 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.805475 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.805530 4949 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c92e26c2-437f-456e-815e-341333febbaa" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.809142 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.809184 4949 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c92e26c2-437f-456e-815e-341333febbaa" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.866491 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.949372 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 14:54:43 crc kubenswrapper[4949]: I0120 14:54:43.507011 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.358070 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.359133 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server" containerID="cri-o://09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.378104 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.378465 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jpqvc" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server" containerID="cri-o://0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: 
I0120 14:54:48.391285 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.391583 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" containerID="cri-o://7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.397812 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.398251 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server" containerID="cri-o://469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.407676 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.410817 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" containerID="cri-o://c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.423387 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.423698 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnf74" 
podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" containerID="cri-o://98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435283 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:48 crc kubenswrapper[4949]: E0120 14:54:48.435666 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435688 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: E0120 14:54:48.435724 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435738 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435940 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435999 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.436657 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.443384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531166 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: 
\"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632649 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.637069 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.640632 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.650899 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.868434 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.869130 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.883626 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894657 4949 generic.go:334] "Generic (PLEG): container finished" podID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerID="7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerDied","Data":"7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894744 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerDied","Data":"37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894755 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894897 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899090 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899181 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899217 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"5386e0b6f5f81c0affeb756c00a742c0370df0824ff74eddb71abeead647e2e6"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899235 4949 scope.go:117] "RemoveContainer" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.903822 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907352 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907492 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907578 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936079 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936103 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936133 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: 
\"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936159 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936209 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936264 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936321 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc 
kubenswrapper[4949]: I0120 14:54:48.936379 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936437 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.937857 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities" (OuterVolumeSpecName: "utilities") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.975889 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities" (OuterVolumeSpecName: "utilities") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.976139 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities" (OuterVolumeSpecName: "utilities") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.976917 4949 scope.go:117] "RemoveContainer" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977485 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977716 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977770 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977793 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"c64e483ea895830221bcb3fd9971d012c5d2f19d12679860699582d93fd37367"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977913 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.978598 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities" (OuterVolumeSpecName: "utilities") pod "2747a148-c24a-4d08-a2ca-19261c14c359" (UID: "2747a148-c24a-4d08-a2ca-19261c14c359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984172 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984325 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk" (OuterVolumeSpecName: "kube-api-access-5kvhk") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "kube-api-access-5kvhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc" (OuterVolumeSpecName: "kube-api-access-n9xwc") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "kube-api-access-n9xwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986844 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986911 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986942 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"7033fb6c503e5baf2b93082863e51771e454c06c8d508e3b8282afa6c65fa61f"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.987922 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf" (OuterVolumeSpecName: "kube-api-access-tr8jf") pod "2747a148-c24a-4d08-a2ca-19261c14c359" (UID: "2747a148-c24a-4d08-a2ca-19261c14c359"). InnerVolumeSpecName "kube-api-access-tr8jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.987996 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528" (OuterVolumeSpecName: "kube-api-access-8h528") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "kube-api-access-8h528". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.001929 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.004992 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012595 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" exitCode=0 Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012690 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"cfc38db22b8953300879f0bf00176a88bf6635c28a6beffd49284a3128d08941"} Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012810 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.014424 4949 scope.go:117] "RemoveContainer" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038052 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038570 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038609 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038641 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod 
\"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038709 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038909 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038928 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038939 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038948 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038957 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038966 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038974 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038983 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038992 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.039000 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040574 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040746 4949 scope.go:117] "RemoveContainer" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040870 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities" (OuterVolumeSpecName: "utilities") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.041880 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": container with ID starting with 09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a not found: ID does not exist" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041906 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"} err="failed to get container status \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": rpc error: code = NotFound desc = could not find container \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": container with ID starting with 09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041929 4949 scope.go:117] "RemoveContainer" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.042946 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": container with ID starting with 136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6 not found: ID does not exist" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.042998 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} 
err="failed to get container status \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": rpc error: code = NotFound desc = could not find container \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": container with ID starting with 136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043032 4949 scope.go:117] "RemoveContainer" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.043312 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": container with ID starting with 83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8 not found: ID does not exist" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043337 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8"} err="failed to get container status \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": rpc error: code = NotFound desc = could not find container \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": container with ID starting with 83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043354 4949 scope.go:117] "RemoveContainer" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.045753 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9" (OuterVolumeSpecName: "kube-api-access-v44h9") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "kube-api-access-v44h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.045981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq" (OuterVolumeSpecName: "kube-api-access-cndjq") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "kube-api-access-cndjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.046833 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.058531 4949 scope.go:117] "RemoveContainer" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.073908 4949 scope.go:117] "RemoveContainer" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091098 4949 scope.go:117] "RemoveContainer" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.091447 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": container with ID starting with 469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c not found: ID does not exist" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091487 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"} err="failed to get container status \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": rpc error: code = NotFound desc = could not find container \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": container with ID starting with 469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091532 4949 scope.go:117] "RemoveContainer" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.091788 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": container with ID starting with abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6 not found: ID does not exist" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091816 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} err="failed to get container status \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": rpc error: code = NotFound desc = could not find container \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": container with ID starting with abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091832 4949 scope.go:117] "RemoveContainer" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.092037 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": container with ID starting with 66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241 not found: ID does not exist" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.092063 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241"} err="failed to get container status \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": rpc error: code = NotFound desc = could not find container \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": 
container with ID starting with 66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.092080 4949 scope.go:117] "RemoveContainer" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.107827 4949 scope.go:117] "RemoveContainer" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.129146 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.129604 4949 scope.go:117] "RemoveContainer" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139873 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139911 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139929 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139942 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 
14:54:49.139953 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139966 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143643 4949 scope.go:117] "RemoveContainer" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.143897 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": container with ID starting with c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192 not found: ID does not exist" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143933 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"} err="failed to get container status \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": rpc error: code = NotFound desc = could not find container \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": container with ID starting with c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143953 4949 scope.go:117] "RemoveContainer" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.144130 
4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": container with ID starting with 24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549 not found: ID does not exist" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144152 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} err="failed to get container status \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": rpc error: code = NotFound desc = could not find container \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": container with ID starting with 24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144165 4949 scope.go:117] "RemoveContainer" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.144402 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": container with ID starting with 46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688 not found: ID does not exist" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144429 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688"} err="failed to get container status \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": rpc error: code = 
NotFound desc = could not find container \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": container with ID starting with 46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144442 4949 scope.go:117] "RemoveContainer" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.148573 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2747a148-c24a-4d08-a2ca-19261c14c359" (UID: "2747a148-c24a-4d08-a2ca-19261c14c359"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.157615 4949 scope.go:117] "RemoveContainer" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.167064 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.175579 4949 scope.go:117] "RemoveContainer" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.196899 4949 scope.go:117] "RemoveContainer" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.197321 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": container with ID starting with 98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6 not found: ID does not exist" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197363 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"} err="failed to get container status \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": rpc error: code = NotFound desc = could not find container \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": container with ID starting with 98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197397 4949 scope.go:117] "RemoveContainer" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.197740 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": container with ID starting with 
3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421 not found: ID does not exist" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197781 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} err="failed to get container status \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": rpc error: code = NotFound desc = could not find container \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": container with ID starting with 3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197814 4949 scope.go:117] "RemoveContainer" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.198190 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": container with ID starting with c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166 not found: ID does not exist" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.198219 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"} err="failed to get container status \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": rpc error: code = NotFound desc = could not find container \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": container with ID starting with c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166 not found: ID does not 
exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.198235 4949 scope.go:117] "RemoveContainer" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.222933 4949 scope.go:117] "RemoveContainer" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.231837 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.234826 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240512 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240547 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240687 4949 scope.go:117] "RemoveContainer" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255360 4949 scope.go:117] "RemoveContainer" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.255900 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": container with ID starting with 0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a not found: ID does not exist" 
containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255949 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} err="failed to get container status \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": rpc error: code = NotFound desc = could not find container \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": container with ID starting with 0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255983 4949 scope.go:117] "RemoveContainer" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.256386 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": container with ID starting with c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32 not found: ID does not exist" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256436 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"} err="failed to get container status \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": rpc error: code = NotFound desc = could not find container \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": container with ID starting with c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256475 4949 scope.go:117] 
"RemoveContainer" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.256857 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": container with ID starting with 758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be not found: ID does not exist" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256907 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"} err="failed to get container status \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": rpc error: code = NotFound desc = could not find container \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": container with ID starting with 758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.303836 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.306477 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.337630 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.346760 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.021795 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.023939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" event={"ID":"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74","Type":"ContainerStarted","Data":"4f19f3364c511a489b321f70c056d1c9670c2fbe97058d4c7fc3369964b75a06"} Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.023973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" event={"ID":"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74","Type":"ContainerStarted","Data":"ce8c1e2f43751fe26551b9f28e71733bcc9b16a2c90fa80d8883135dae67069f"} Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.024236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.027975 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.027998 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.032488 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.056332 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" podStartSLOduration=2.056312802 podStartE2EDuration="2.056312802s" podCreationTimestamp="2026-01-20 14:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:50.055122023 +0000 UTC m=+285.864952891" watchObservedRunningTime="2026-01-20 14:54:50.056312802 +0000 UTC m=+285.866143660" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.092064 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.097566 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.126564 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.138339 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.149146 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.153564 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.796887 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13eef670-55b3-4832-a856-fe2bf8239996" path="/var/lib/kubelet/pods/13eef670-55b3-4832-a856-fe2bf8239996/volumes" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.798733 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" path="/var/lib/kubelet/pods/25a072c1-c9a6-4a14-9eee-81f3f967503b/volumes" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.799432 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" path="/var/lib/kubelet/pods/2747a148-c24a-4d08-a2ca-19261c14c359/volumes" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.800765 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" path="/var/lib/kubelet/pods/3da2cb76-6534-4d77-95c0-3d6aaff0de4b/volumes" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.801640 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" path="/var/lib/kubelet/pods/78cf28ec-e605-49c2-882a-5cb98697605b/volumes" Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.803053 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" path="/var/lib/kubelet/pods/8827d4ac-468d-4ceb-91c1-fb310a00ddcd/volumes" Jan 20 14:55:04 crc kubenswrapper[4949]: I0120 14:55:04.595879 4949 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.565584 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.566644 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" containerID="cri-o://12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" gracePeriod=30 Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.654972 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.655191 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" containerID="cri-o://c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" gracePeriod=30 Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.986642 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.029459 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.033898 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.034010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.034047 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.038053 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.038682 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.039585 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config" (OuterVolumeSpecName: "config") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.039871 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.040623 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.040649 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.041001 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca" (OuterVolumeSpecName: "client-ca") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.052993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n" (OuterVolumeSpecName: "kube-api-access-5x48n") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "kube-api-access-5x48n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.056320 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141599 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141664 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141723 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " 
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141775 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142029 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142045 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142057 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142389 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142451 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config" (OuterVolumeSpecName: "config") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.144619 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.144688 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l" (OuterVolumeSpecName: "kube-api-access-j2v6l") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "kube-api-access-j2v6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173892 4949 generic.go:334] "Generic (PLEG): container finished" podID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" exitCode=0 Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173951 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173974 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerDied","Data":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.174002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerDied","Data":"6cdd7178026b2587db50c95fe7c40688b8e05cd993d070aa0db4f3a3e9c38e1f"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.174018 4949 scope.go:117] "RemoveContainer" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178216 4949 generic.go:334] "Generic (PLEG): container finished" podID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" exitCode=0 Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerDied","Data":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerDied","Data":"7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178573 4949 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.209889 4949 scope.go:117] "RemoveContainer" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: E0120 14:55:13.210977 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": container with ID starting with c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc not found: ID does not exist" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.211018 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} err="failed to get container status \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": rpc error: code = NotFound desc = could not find container \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": container with ID starting with c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc not found: ID does not exist" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.211045 4949 scope.go:117] "RemoveContainer" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.216558 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.219548 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:55:13 crc kubenswrapper[4949]: 
I0120 14:55:13.227422 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.230305 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.236216 4949 scope.go:117] "RemoveContainer" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: E0120 14:55:13.236724 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": container with ID starting with 12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45 not found: ID does not exist" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.236775 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} err="failed to get container status \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": rpc error: code = NotFound desc = could not find container \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": container with ID starting with 12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45 not found: ID does not exist" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243481 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243542 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2v6l\" (UniqueName: 
\"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243558 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243571 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.231313 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232029 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232053 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232066 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232077 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232097 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232108 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232121 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232133 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232151 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232162 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232179 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232189 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232202 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232215 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232230 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232242 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232261 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232272 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232290 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232301 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232317 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232328 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232344 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232355 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232368 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232379 4949 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232401 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232412 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232425 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232449 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232460 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232476 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232487 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232503 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232537 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232681 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232705 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232724 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232740 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232756 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232767 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232784 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232797 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.233267 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.236957 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237162 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237271 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237606 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237911 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.238773 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.239706 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.243273 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.244121 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.244423 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.246458 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.246947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247133 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247177 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247336 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247355 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.254501 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 14:55:14 crc 
kubenswrapper[4949]: I0120 14:55:14.356437 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356493 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356624 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356707 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356757 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " 
pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.423247 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.423652 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-76kqf proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" podUID="8f231268-8959-425f-94a1-39d0ec215e63" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.440893 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.441306 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-ms772 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" podUID="284b359a-a00f-4f88-bb5a-cd477997cfe2" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.457969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: 
\"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458117 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458155 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458416 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459270 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 
14:55:14.459455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.473340 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.473354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " 
pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.481448 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.481666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.795140 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" path="/var/lib/kubelet/pods/086b7727-a8b6-4416-a46e-60e4474e79e2/volumes"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.796356 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" path="/var/lib/kubelet/pods/6278caf6-b4d9-414c-99ed-686de2b23a80/volumes"
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.202163 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.203006 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.219902 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.224671 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.266980 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267037 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267263 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267298 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267334 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") "
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269502 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config" (OuterVolumeSpecName: "config") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269581 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config" (OuterVolumeSpecName: "config") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269903 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca" (OuterVolumeSpecName: "client-ca") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.271173 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772" (OuterVolumeSpecName: "kube-api-access-ms772") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "kube-api-access-ms772". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.273884 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.275304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf" (OuterVolumeSpecName: "kube-api-access-76kqf") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "kube-api-access-76kqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.275637 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368801 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368846 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368860 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368875 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368888 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368899 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368910 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368921 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368932 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.205922 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.205996 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.247385 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.252159 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.260931 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.262214 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.264761 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265430 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265615 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265472 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265549 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.267427 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.268038 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.282294 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.285841 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379966 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.380086 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.483499 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.483609 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.489510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.501880 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.581942 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.765018 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"]
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.795120 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284b359a-a00f-4f88-bb5a-cd477997cfe2" path="/var/lib/kubelet/pods/284b359a-a00f-4f88-bb5a-cd477997cfe2/volumes"
Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.795485 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f231268-8959-425f-94a1-39d0ec215e63" path="/var/lib/kubelet/pods/8f231268-8959-425f-94a1-39d0ec215e63/volumes"
Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.211874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerStarted","Data":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"}
Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.212210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerStarted","Data":"313ea577e8b3ff4c88a60fc56d9fc090a87bd2fe0ebac3dc4712d691702faa51"}
Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.212230 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.225595 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" podStartSLOduration=3.225577369 podStartE2EDuration="3.225577369s" podCreationTimestamp="2026-01-20 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:55:17.224909178 +0000 UTC m=+313.034740056" watchObservedRunningTime="2026-01-20 14:55:17.225577369 +0000 UTC m=+313.035408237"
Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.687076 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.234785 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"]
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.236843 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.240454 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.241464 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.241841 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.242118 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.243511 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.245684 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.252985 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"]
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.253267 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322545 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.323113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.323192 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424349 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.425377 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.425655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.426732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.431693 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.443590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.560259 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.742975 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"]
Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230052 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerStarted","Data":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"}
Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230572 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230611 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerStarted","Data":"05e85e4adc482ebfc0757de2dd7679466c3f64d1e6473ddc8666c2167f6cdf09"}
Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.235092 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb"
Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.251728 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" podStartSLOduration=6.251707725 podStartE2EDuration="6.251707725s" podCreationTimestamp="2026-01-20 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:55:20.247703191 +0000 UTC m=+316.057534139" watchObservedRunningTime="2026-01-20 14:55:20.251707725 +0000 UTC m=+316.061538593"
Jan 20 14:55:57 crc kubenswrapper[4949]: I0120 14:55:57.151848 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 14:55:57 crc kubenswrapper[4949]: I0120 14:55:57.152449 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.979371 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"]
Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.981188 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.994280 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"]
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154551 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154609 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154642 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154681 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.180916 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256573 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256605 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx"
Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256644 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256714 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.257661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.258251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 
14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.258414 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.262911 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.269244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.275388 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.275942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.297908 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.523330 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"] Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.530753 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" event={"ID":"84ec1440-abb3-49f3-ae31-abbb980aad98","Type":"ContainerStarted","Data":"ee8482ad744df465f8d01bb554a7859977858e6cdc32e54ce822bbb467347510"} Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.531110 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" event={"ID":"84ec1440-abb3-49f3-ae31-abbb980aad98","Type":"ContainerStarted","Data":"ca42e74ebd962a599a4cea7e55d4f726348461d2bc7400b1d3f8a6890e5e25c7"} Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.532058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.552954 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" podStartSLOduration=2.552935817 podStartE2EDuration="2.552935817s" podCreationTimestamp="2026-01-20 14:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:08.549898686 +0000 UTC m=+364.359729544" watchObservedRunningTime="2026-01-20 14:56:08.552935817 +0000 UTC m=+364.362766675" Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.588569 4949 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.590316 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" containerID="cri-o://fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" gracePeriod=30 Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.598419 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.598688 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" containerID="cri-o://dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" gracePeriod=30 Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.990560 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.995806 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130278 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130412 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130433 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131095 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config" (OuterVolumeSpecName: "config") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131226 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131320 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config" (OuterVolumeSpecName: "config") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131545 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131569 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131586 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131989 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca" (OuterVolumeSpecName: "client-ca") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: 
"210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.132029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.136792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k" (OuterVolumeSpecName: "kube-api-access-89c8k") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "kube-api-access-89c8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.137034 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.138191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9" (OuterVolumeSpecName: "kube-api-access-fnrn9") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "kube-api-access-fnrn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.144668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232645 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232685 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232695 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232706 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232715 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232723 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565742 4949 generic.go:334] "Generic (PLEG): container finished" podID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" exitCode=0 Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565866 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerDied","Data":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerDied","Data":"313ea577e8b3ff4c88a60fc56d9fc090a87bd2fe0ebac3dc4712d691702faa51"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565929 4949 scope.go:117] "RemoveContainer" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.566236 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569201 4949 generic.go:334] "Generic (PLEG): container finished" podID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" exitCode=0 Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerDied","Data":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569270 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerDied","Data":"05e85e4adc482ebfc0757de2dd7679466c3f64d1e6473ddc8666c2167f6cdf09"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569405 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588013 4949 scope.go:117] "RemoveContainer" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: E0120 14:56:13.588598 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": container with ID starting with dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936 not found: ID does not exist" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588709 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"} err="failed to get container status \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": rpc error: code = NotFound desc = could not find container \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": container with ID starting with dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936 not found: ID does not exist" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588814 4949 scope.go:117] "RemoveContainer" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.607489 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.607671 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.610693 4949 scope.go:117] 
"RemoveContainer" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc kubenswrapper[4949]: E0120 14:56:13.611176 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": container with ID starting with fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea not found: ID does not exist" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.611269 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"} err="failed to get container status \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": rpc error: code = NotFound desc = could not find container \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": container with ID starting with fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea not found: ID does not exist" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.615897 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.620306 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.275651 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"] Jan 20 14:56:14 crc kubenswrapper[4949]: E0120 14:56:14.276024 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 
14:56:14.276064 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: E0120 14:56:14.276087 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276104 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276304 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276339 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.277106 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281131 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281416 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281574 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281694 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281221 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282018 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282127 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282914 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.284510 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.284731 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285073 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285184 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285493 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.291247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.292194 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"]
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.303002 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"]
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448045 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448151 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448230 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448247 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448313 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551203 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551386 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551465 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.552302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.554148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.555882 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.563310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.565847 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.572325 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.596439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.608540 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.819157 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" path="/var/lib/kubelet/pods/210dd9ac-f90d-4fa4-aeca-016173d6bf53/volumes"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.822044 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" path="/var/lib/kubelet/pods/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7/volumes"
Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.841901 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"]
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.004258 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"]
Jan 20 14:56:15 crc kubenswrapper[4949]: W0120 14:56:15.013824 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e86fc1_f82b_4736_af73_a322d2324a73.slice/crio-97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477 WatchSource:0}: Error finding container 97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477: Status 404 returned error can't find the container with id 97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.582641 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" event={"ID":"46c5d356-4598-454d-9f32-304c9d1a003f","Type":"ContainerStarted","Data":"68c833d57e2c2677ee7859ae5870b6e92181c9a751d178e0ef6c641c60dd16b3"}
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.582987 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.583005 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" event={"ID":"46c5d356-4598-454d-9f32-304c9d1a003f","Type":"ContainerStarted","Data":"722267d30647c087ea63666ea27eb50b2039f24d55fbec97e3472ddcb3ce46e9"}
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.586335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" event={"ID":"f6e86fc1-f82b-4736-af73-a322d2324a73","Type":"ContainerStarted","Data":"32ef3ca09555aaac38457eb5e80053aeeb9798c783ded45ac91108a98f6d9b21"}
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.586381 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" event={"ID":"f6e86fc1-f82b-4736-af73-a322d2324a73","Type":"ContainerStarted","Data":"97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477"}
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.586671 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.590669 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.592643 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.629652 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" podStartSLOduration=3.629629261 podStartE2EDuration="3.629629261s" podCreationTimestamp="2026-01-20 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:15.626891358 +0000 UTC m=+371.436722226" watchObservedRunningTime="2026-01-20 14:56:15.629629261 +0000 UTC m=+371.439460119"
Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.707898 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" podStartSLOduration=3.707881048 podStartE2EDuration="3.707881048s" podCreationTimestamp="2026-01-20 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:15.705714239 +0000 UTC m=+371.515545097" watchObservedRunningTime="2026-01-20 14:56:15.707881048 +0000 UTC m=+371.517711896"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.779685 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xr695"]
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.783439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.787766 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.805570 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr695"]
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.956967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.957077 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.957489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.973304 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"]
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.974918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.980592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.990191 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"]
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059317 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059541 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059981 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.060187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.060725 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.091114 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.110093 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xr695"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.161678 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162082 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162644 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.180055 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.336890 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kmnv"
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.528885 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr695"]
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.629759 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"ad832ffb9b4f52428f24eb998ee2602b9e5846006b4f7a3c41f9d611764ccd57"}
Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.759386 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"]
Jan 20 14:56:22 crc kubenswrapper[4949]: W0120 14:56:22.770766 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55010bf_14fe_4c92_8fe4_d2864bf74ad1.slice/crio-3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16 WatchSource:0}: Error finding container 3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16: Status 404 returned error can't find the container with id 3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16
Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.637896 4949 generic.go:334] "Generic (PLEG): container finished" podID="090c2072-966d-4848-82fc-c9aecee3d6c8" containerID="7b0867ecabd0014be864d1e6f7bf0e03195528db1f305c62167566a01902fbf6" exitCode=0
Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.637978 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerDied","Data":"7b0867ecabd0014be864d1e6f7bf0e03195528db1f305c62167566a01902fbf6"}
Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639703 4949 generic.go:334] "Generic (PLEG): container finished" podID="a55010bf-14fe-4c92-8fe4-d2864bf74ad1" containerID="c5f9596722e2717ba4bbc6e2743e3b28ff8bfe70323955ba96468a2919203377" exitCode=0
Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerDied","Data":"c5f9596722e2717ba4bbc6e2743e3b28ff8bfe70323955ba96468a2919203377"}
Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639848 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerStarted","Data":"3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16"}
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.166970 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"]
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.168615 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.170853 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.176959 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"]
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315117 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315185 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315241 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.360852 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"]
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.362227 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.364510 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.369905 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"]
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417806 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417948 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.419249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.419281 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.448423 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519643 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.532388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97g5q"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620782 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620856 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.621419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz"
Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.621915 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.650452 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.660917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d"} Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.677623 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.977496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"] Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.079437 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"] Jan 20 14:56:25 crc kubenswrapper[4949]: W0120 14:56:25.080766 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod983905b2_cefb_487e_887f_630d669af9ec.slice/crio-f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46 WatchSource:0}: Error finding container f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46: Status 404 returned error can't find the container with id f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.669668 4949 generic.go:334] "Generic (PLEG): container finished" podID="a55010bf-14fe-4c92-8fe4-d2864bf74ad1" containerID="6f94d99f7ae815f99c7c8c843c9517e97331040f874fb4ab2e873f25aa66d3c5" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.669788 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerDied","Data":"6f94d99f7ae815f99c7c8c843c9517e97331040f874fb4ab2e873f25aa66d3c5"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674010 4949 generic.go:334] "Generic (PLEG): container finished" podID="983905b2-cefb-487e-887f-630d669af9ec" containerID="3268808c5d9377a01bda76e9067683e127450dd28bb9c6135711e6825f997d39" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" 
event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerDied","Data":"3268808c5d9377a01bda76e9067683e127450dd28bb9c6135711e6825f997d39"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.679856 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f68902a-0bee-45a6-96c4-b4a80feaba0b" containerID="776e57ffb05d63b3a8f310d0ea4274d0c2770b33e77f95ca40e90a3aafeb13a0" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.680080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerDied","Data":"776e57ffb05d63b3a8f310d0ea4274d0c2770b33e77f95ca40e90a3aafeb13a0"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.680128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerStarted","Data":"4df821b1e618830109653783a7423beeaddee5e2149175a44fa61f71dc3770c7"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.682556 4949 generic.go:334] "Generic (PLEG): container finished" podID="090c2072-966d-4848-82fc-c9aecee3d6c8" containerID="c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.682600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerDied","Data":"c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 
14:56:26.690405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"a03cb826bb1586f2c70f06f895e417715734e69894da7cda0b67e4ad55710489"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 14:56:26.692370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 14:56:26.713353 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xr695" podStartSLOduration=3.233555087 podStartE2EDuration="5.713333355s" podCreationTimestamp="2026-01-20 14:56:21 +0000 UTC" firstStartedPulling="2026-01-20 14:56:23.640240564 +0000 UTC m=+379.450071422" lastFinishedPulling="2026-01-20 14:56:26.120018782 +0000 UTC m=+381.929849690" observedRunningTime="2026-01-20 14:56:26.711836096 +0000 UTC m=+382.521666954" watchObservedRunningTime="2026-01-20 14:56:26.713333355 +0000 UTC m=+382.523164213" Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.152167 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.152233 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:56:27 crc 
kubenswrapper[4949]: I0120 14:56:27.305660 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.357704 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.705982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerStarted","Data":"4b45631f8d76233b71661d12cb46b94e4103b3108fa69f26be3698b342e57fd5"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.707966 4949 generic.go:334] "Generic (PLEG): container finished" podID="983905b2-cefb-487e-887f-630d669af9ec" containerID="3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d" exitCode=0 Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.708020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerDied","Data":"3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.710077 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f68902a-0bee-45a6-96c4-b4a80feaba0b" containerID="7e212077e453b38deefbdf51b0127bbcfdbccff7a12198434ed1240e59fa9511" exitCode=0 Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.710147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerDied","Data":"7e212077e453b38deefbdf51b0127bbcfdbccff7a12198434ed1240e59fa9511"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.731096 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-8kmnv" podStartSLOduration=3.51399369 podStartE2EDuration="6.731077341s" podCreationTimestamp="2026-01-20 14:56:21 +0000 UTC" firstStartedPulling="2026-01-20 14:56:23.641803766 +0000 UTC m=+379.451634634" lastFinishedPulling="2026-01-20 14:56:26.858887427 +0000 UTC m=+382.668718285" observedRunningTime="2026-01-20 14:56:27.725182267 +0000 UTC m=+383.535013125" watchObservedRunningTime="2026-01-20 14:56:27.731077341 +0000 UTC m=+383.540908209" Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.718255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"029d90f91b5bd943a69458376c2228d768a98a74a8fa0347e7eacecc3c6d0d9e"} Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.720340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerStarted","Data":"90052cc8e2c2be637a1705466ef64c1f7add3f5cb75ee4d630b16c8860a1c3b3"} Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.744111 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmxfz" podStartSLOduration=2.211934063 podStartE2EDuration="4.74408214s" podCreationTimestamp="2026-01-20 14:56:24 +0000 UTC" firstStartedPulling="2026-01-20 14:56:25.674772413 +0000 UTC m=+381.484603271" lastFinishedPulling="2026-01-20 14:56:28.20692048 +0000 UTC m=+384.016751348" observedRunningTime="2026-01-20 14:56:28.737177552 +0000 UTC m=+384.547008410" watchObservedRunningTime="2026-01-20 14:56:28.74408214 +0000 UTC m=+384.553913038" Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.754783 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97g5q" podStartSLOduration=1.992103323 
podStartE2EDuration="4.754765013s" podCreationTimestamp="2026-01-20 14:56:24 +0000 UTC" firstStartedPulling="2026-01-20 14:56:25.681099242 +0000 UTC m=+381.490930100" lastFinishedPulling="2026-01-20 14:56:28.443760932 +0000 UTC m=+384.253591790" observedRunningTime="2026-01-20 14:56:28.752160557 +0000 UTC m=+384.561991445" watchObservedRunningTime="2026-01-20 14:56:28.754765013 +0000 UTC m=+384.564595871" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.111501 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.111822 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.173413 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.338006 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.338281 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.388788 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.777420 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.780452 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.533538 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.533893 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.572530 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.678192 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.678235 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.723991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.795619 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.796106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:52 crc kubenswrapper[4949]: I0120 14:56:52.392532 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" containerID="cri-o://acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" gracePeriod=30 Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.629744 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749399 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749817 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749886 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749956 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750016 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750089 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.751155 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.752004 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756132 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756260 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.757738 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h" (OuterVolumeSpecName: "kube-api-access-knt8h") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "kube-api-access-knt8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.767066 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.778904 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852382 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852434 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852451 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852463 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852475 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852489 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852501 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878285 4949 generic.go:334] "Generic (PLEG): container finished" podID="595f245f-676f-4ef1-8073-5e235b4a338a" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" exitCode=0 Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerDied","Data":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"} Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878360 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878382 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerDied","Data":"75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9"} Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878399 4949 scope.go:117] "RemoveContainer" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.898932 4949 scope.go:117] "RemoveContainer" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: E0120 14:56:54.899794 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": container with ID starting with acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f not found: ID does not exist" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.899879 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"} err="failed to get container status \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": rpc error: code = NotFound desc = could not find container \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": container with ID starting with acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f not found: ID does not exist" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.901884 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.907441 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:56 crc kubenswrapper[4949]: I0120 14:56:56.796921 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" path="/var/lib/kubelet/pods/595f245f-676f-4ef1-8073-5e235b4a338a/volumes" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152442 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152559 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152634 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.153412 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.153510 4949 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" gracePeriod=600 Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.897216 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" exitCode=0 Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.897324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.899972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.900069 4949 scope.go:117] "RemoveContainer" containerID="575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" Jan 20 14:58:57 crc kubenswrapper[4949]: I0120 14:58:57.152930 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:58:57 crc kubenswrapper[4949]: I0120 14:58:57.153420 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:04 crc kubenswrapper[4949]: I0120 14:59:04.993351 4949 scope.go:117] "RemoveContainer" containerID="7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" Jan 20 14:59:27 crc kubenswrapper[4949]: I0120 14:59:27.152361 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:59:27 crc kubenswrapper[4949]: I0120 14:59:27.152937 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152141 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152776 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152834 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.153602 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.153693 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" gracePeriod=600 Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019197 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" exitCode=0 Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019888 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019943 4949 scope.go:117] "RemoveContainer" 
containerID="c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.171634 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: E0120 15:00:00.172772 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.172793 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.173017 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.174189 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.176284 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.177803 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.179856 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: 
\"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190924 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293419 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.295180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.301771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.308335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.491651 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.682323 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: W0120 15:00:00.686083 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591138ca_7bcb_4584_8089_82e6223d1457.slice/crio-0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d WatchSource:0}: Error finding container 0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d: Status 404 returned error can't find the container with id 0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041653 4949 generic.go:334] "Generic (PLEG): container finished" podID="591138ca-7bcb-4584-8089-82e6223d1457" containerID="4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc" exitCode=0 Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerDied","Data":"4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc"} Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041789 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerStarted","Data":"0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d"} Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.958171 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 
15:00:01.959308 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.961838 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.962042 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.962604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-p5pww" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.969069 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.970152 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.970158 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.974401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-x98t6" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.987684 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.988440 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.990686 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jfpsc" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.993842 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016461 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016493 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.025089 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.117046 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.117274 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.117325 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.139553 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.140422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.174500 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.281353 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.283019 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.307751 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.327991 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420354 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420787 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.421709 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume" (OuterVolumeSpecName: "config-volume") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.427664 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.431044 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k" (OuterVolumeSpecName: "kube-api-access-tws4k") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "kube-api-access-tws4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.502942 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:02 crc kubenswrapper[4949]: W0120 15:00:02.513625 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd775b9_2d07_40bb_964c_6e935aa6775a.slice/crio-153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a WatchSource:0}: Error finding container 153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a: Status 404 returned error can't find the container with id 153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.516440 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.522241 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.522278 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 
15:00:02.522294 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.785859 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.842002 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:02 crc kubenswrapper[4949]: W0120 15:00:02.844017 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ca44809_a121_411d_8be6_f1a8b879b97f.slice/crio-e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd WatchSource:0}: Error finding container e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd: Status 404 returned error can't find the container with id e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.065119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-k9xq5" event={"ID":"1ca44809-a121-411d-8be6-f1a8b879b97f","Type":"ContainerStarted","Data":"e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.066567 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" event={"ID":"9cd775b9-2d07-40bb-964c-6e935aa6775a","Type":"ContainerStarted","Data":"153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069737 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerDied","Data":"0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069803 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d" Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.071864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" event={"ID":"512fc928-abb3-4353-9543-be5d35cd8ccd","Type":"ContainerStarted","Data":"b2ef08dc2eef1aeeb1186c19d25617c0ec7238aba9efb2669767b7d5f22705c9"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.098465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-k9xq5" event={"ID":"1ca44809-a121-411d-8be6-f1a8b879b97f","Type":"ContainerStarted","Data":"ad02cc3732e8b65ef671b097d77719b491b40c6f1470dff6d3a65d8c6c422445"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.100857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" event={"ID":"9cd775b9-2d07-40bb-964c-6e935aa6775a","Type":"ContainerStarted","Data":"4beaba6dcab7b8f832b20753978fc52da156336c7d361e1775c8b8e9fd86000e"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.104072 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" event={"ID":"512fc928-abb3-4353-9543-be5d35cd8ccd","Type":"ContainerStarted","Data":"44e9f911114192e960f4f274f436b15d26a8e93053969735dd1f61d46c174dee"} Jan 20 15:00:08 crc 
kubenswrapper[4949]: I0120 15:00:08.104147 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.120508 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-k9xq5" podStartSLOduration=3.06217718 podStartE2EDuration="7.12048422s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.845745876 +0000 UTC m=+598.655576734" lastFinishedPulling="2026-01-20 15:00:06.904052916 +0000 UTC m=+602.713883774" observedRunningTime="2026-01-20 15:00:08.114449094 +0000 UTC m=+603.924279952" watchObservedRunningTime="2026-01-20 15:00:08.12048422 +0000 UTC m=+603.930315088" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.135266 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" podStartSLOduration=2.6860658710000003 podStartE2EDuration="7.135221278s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.51622176 +0000 UTC m=+598.326052618" lastFinishedPulling="2026-01-20 15:00:06.965377167 +0000 UTC m=+602.775208025" observedRunningTime="2026-01-20 15:00:08.134481024 +0000 UTC m=+603.944311882" watchObservedRunningTime="2026-01-20 15:00:08.135221278 +0000 UTC m=+603.945052136" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.153739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" podStartSLOduration=3.037491508 podStartE2EDuration="7.153719528s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.788215629 +0000 UTC m=+598.598046477" lastFinishedPulling="2026-01-20 15:00:06.904443639 +0000 UTC m=+602.714274497" observedRunningTime="2026-01-20 15:00:08.149167291 +0000 UTC m=+603.958998169" 
watchObservedRunningTime="2026-01-20 15:00:08.153719528 +0000 UTC m=+603.963550406" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.560344 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561048 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" containerID="cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561456 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" containerID="cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561534 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" containerID="cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561593 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" containerID="cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561637 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561681 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" containerID="cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561718 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" containerID="cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.600768 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" containerID="cri-o://c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.877552 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.880073 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-acl-logging/0.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.880792 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-controller/0.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.881200 4949 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935453 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mxmf"] Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935686 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935702 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935710 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935718 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935726 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kubecfg-setup" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935732 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kubecfg-setup" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935738 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935744 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935751 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935759 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935769 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935775 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935782 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935790 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935803 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935811 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935826 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935836 4949 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935843 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935852 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935860 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935866 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935873 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591138ca-7bcb-4584-8089-82e6223d1457" containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935887 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="591138ca-7bcb-4584-8089-82e6223d1457" containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935972 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935982 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935991 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935998 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="591138ca-7bcb-4584-8089-82e6223d1457" containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936004 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936012 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936018 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936024 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936032 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936042 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936048 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.936124 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936132 4949 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936215 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936378 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.937662 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.939687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940242 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940288 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940303 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc 
kubenswrapper[4949]: I0120 15:00:11.940339 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6h6\" (UniqueName: \"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940405 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940419 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940469 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940552 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940581 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940608 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041053 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041106 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket" (OuterVolumeSpecName: "log-socket") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log" (OuterVolumeSpecName: "node-log") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041235 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041620 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041692 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041717 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041732 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041735 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041854 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042196 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042328 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042383 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042410 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042433 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042450 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042470 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042583 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042785 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042813 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash" (OuterVolumeSpecName: "host-slash") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042835 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042869 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042876 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042898 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042962 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042998 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043002 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043030 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043048 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043057 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 
15:00:12.043077 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043104 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043168 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043280 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043365 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6h6\" (UniqueName: \"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043402 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043431 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043512 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043550 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043616 4949 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043629 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043641 4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043653 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043664 4949 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043675 4949 
reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043686 4949 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043696 4949 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043707 4949 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043717 4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043729 4949 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043740 4949 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043749 4949 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043758 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043766 4949 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043774 4949 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043783 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043846 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 
crc kubenswrapper[4949]: I0120 15:00:12.044186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044492 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044525 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044528 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044560 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044544 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044582 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.046789 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb" (OuterVolumeSpecName: "kube-api-access-z9cmb") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "kube-api-access-z9cmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.047406 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.047993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.054960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.060016 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6h6\" (UniqueName: \"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.126737 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127685 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127727 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" exitCode=2 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127820 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127873 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.128334 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.128484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.131243 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.146740 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-acl-logging/0.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.147980 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-controller/0.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148042 4949 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148071 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148085 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149634 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" exitCode=0 Jan 20 15:00:12 crc 
kubenswrapper[4949]: I0120 15:00:12.149664 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149671 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149678 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149685 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149691 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149697 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" exitCode=143 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149707 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" exitCode=143 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" 
event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149793 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149805 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149752 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149834 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149909 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149915 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149921 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149926 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149931 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149935 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149940 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149945 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149950 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149957 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149965 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149973 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149978 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149984 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149989 4949 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149998 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150003 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150009 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150014 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150019 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150033 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 
15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150040 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150046 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150051 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150056 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150061 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150066 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150071 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150076 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 
15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150081 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150087 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150095 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150107 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150114 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150119 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150125 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150130 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150135 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150140 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150145 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150150 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.172625 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.191209 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.195948 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.197345 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.212108 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc 
kubenswrapper[4949]: I0120 15:00:12.224278 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.235252 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.245272 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.255109 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.259225 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.273200 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: W0120 15:00:12.280995 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a3b80e_47e3_4bd6_8f47_7160cb0ce59a.slice/crio-de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96 WatchSource:0}: Error finding container de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96: Status 404 returned error can't find the container with id de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.286676 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.306907 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc 
kubenswrapper[4949]: I0120 15:00:12.321048 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.321535 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321566 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321585 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.321862 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321908 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321942 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.322306 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322351 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322380 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.322817 4949 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322842 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322862 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323108 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323126 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container 
\"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323137 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323363 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323380 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323391 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323727 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" 
containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323746 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323758 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324074 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324098 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324114 4949 scope.go:117] 
"RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324373 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324395 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324410 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324646 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324679 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324696 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325014 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325040 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325323 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not 
exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325348 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325754 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325775 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326014 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326089 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326801 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status 
\"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326835 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327059 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327076 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327394 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327407 4949 scope.go:117] "RemoveContainer" 
containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327705 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327733 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328035 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328057 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328367 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could 
not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328394 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328920 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328939 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330430 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330455 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 
15:00:12.330753 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330770 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331100 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331787 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331823 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332100 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status 
\"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332133 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332389 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332418 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332696 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332718 4949 scope.go:117] "RemoveContainer" 
containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332942 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332973 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.333169 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.333189 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.334443 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could 
not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.334473 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335449 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335481 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335843 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335896 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 
15:00:12.336141 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336164 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336379 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336413 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336642 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 
2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336664 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336881 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336910 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337101 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337130 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337912 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337936 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338151 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338172 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338311 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not 
exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.796815 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" path="/var/lib/kubelet/pods/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/volumes" Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.169225 4949 generic.go:334] "Generic (PLEG): container finished" podID="64a3b80e-47e3-4bd6-8f47-7160cb0ce59a" containerID="d654df5c84f5b2ba92addf90bcf5db7b22a94ee32472c1967777228a107239a5" exitCode=0 Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.169338 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerDied","Data":"d654df5c84f5b2ba92addf90bcf5db7b22a94ee32472c1967777228a107239a5"} Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.172642 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96"} Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.173878 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185239 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"75545bf5c5d489625d6146bc8bc5966b32a56e4ebee01f914356c6c5f29fb55f"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" 
event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"d5a17be3922c3109b28464d39203204a98bae74e4b68c2432360d34ea83a712b"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185609 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"55612c2f7ee955d25515e5f2b52fd81d49cf35c205a2291122cff2fb3776dccc"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185623 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"8f57cff25330b0b8ce4cdecb03ff1186e2126efdc56965070a3ad99ac8edd72c"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185634 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"7cd650029e29248c26efe1acd4969723fab1106a7a59cc2d2e365c21eca6fecb"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185644 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"c7e96f449c72106e35be484c9382fcba51defce2274710087da32d7339ae7e1e"} Jan 20 15:00:16 crc kubenswrapper[4949]: I0120 15:00:16.198569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"95ba337532e1704dbee06ce9dd09e1bcea1a7c62fa214acd28762ad0901cd526"} Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.218629 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" 
event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"087f068568e694b85285a6c161905a7896010bd0870614500689cf988e5fda07"} Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.218988 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.219006 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.245322 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.255939 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" podStartSLOduration=8.255921811 podStartE2EDuration="8.255921811s" podCreationTimestamp="2026-01-20 15:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:00:19.247489147 +0000 UTC m=+615.057320005" watchObservedRunningTime="2026-01-20 15:00:19.255921811 +0000 UTC m=+615.065752669" Jan 20 15:00:20 crc kubenswrapper[4949]: I0120 15:00:20.231365 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:20 crc kubenswrapper[4949]: I0120 15:00:20.268038 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:22 crc kubenswrapper[4949]: I0120 15:00:22.788739 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:22 crc kubenswrapper[4949]: E0120 15:00:22.789856 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 15:00:37 crc kubenswrapper[4949]: I0120 15:00:37.789666 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:38 crc kubenswrapper[4949]: I0120 15:00:38.338362 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:38 crc kubenswrapper[4949]: I0120 15:00:38.338726 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"ffabd8ff2e0b25be4ba66141518acd8b6b9068f3e3a92e9fd03df65a83adc54c"} Jan 20 15:00:42 crc kubenswrapper[4949]: I0120 15:00:42.282318 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.089215 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"] Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.092836 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.094858 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"] Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.095370 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.170597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.171278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.171325 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: 
I0120 15:00:58.272904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.272993 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273065 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.291622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.437630 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.635718 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"] Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.461960 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="5b51387318628027a96aa5844ef249d0d94c69dea3e6fbcd48dfb3d440c9ec7c" exitCode=0 Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.462040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"5b51387318628027a96aa5844ef249d0d94c69dea3e6fbcd48dfb3d440c9ec7c"} Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.462102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerStarted","Data":"dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9"} Jan 20 15:01:01 crc kubenswrapper[4949]: I0120 15:01:01.475143 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="cf9bb695a1350c9ceec799d4f88bacf7b8002989afc6a8e95bff9847a6fc9823" exitCode=0 Jan 20 15:01:01 crc kubenswrapper[4949]: I0120 15:01:01.475204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"cf9bb695a1350c9ceec799d4f88bacf7b8002989afc6a8e95bff9847a6fc9823"} Jan 20 15:01:02 crc kubenswrapper[4949]: I0120 15:01:02.487237 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="5fb640316408983836ba4faaf12805ccea2809df987579f1dbb61aca30eb0631" exitCode=0 Jan 20 15:01:02 crc kubenswrapper[4949]: I0120 15:01:02.487281 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"5fb640316408983836ba4faaf12805ccea2809df987579f1dbb61aca30eb0631"} Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.756597 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839049 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle" (OuterVolumeSpecName: "bundle") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.844349 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm" (OuterVolumeSpecName: "kube-api-access-rpvmm") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "kube-api-access-rpvmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.853415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util" (OuterVolumeSpecName: "util") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940469 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940545 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940566 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9"} Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501767 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501780 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.776739 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"] Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777250 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777264 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract" Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777280 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="util" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777286 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="util" Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777295 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="pull" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777302 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="pull" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777389 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777765 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.779837 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.779971 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4gdxk" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.780016 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.798229 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"] Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.879635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: \"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.981722 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: \"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.009642 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: 
\"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.096283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.288069 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"] Jan 20 15:01:07 crc kubenswrapper[4949]: W0120 15:01:07.297685 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2bfb1bf_1717_4d51_9632_204856f869f4.slice/crio-1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5 WatchSource:0}: Error finding container 1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5: Status 404 returned error can't find the container with id 1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5 Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.516686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" event={"ID":"b2bfb1bf-1717-4d51-9632-204856f869f4","Type":"ContainerStarted","Data":"1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5"} Jan 20 15:01:10 crc kubenswrapper[4949]: I0120 15:01:10.540090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" event={"ID":"b2bfb1bf-1717-4d51-9632-204856f869f4","Type":"ContainerStarted","Data":"bc4f499c09fee86ecd739660816ce7aab9d3965845fc5a784be91cb8045556ec"} Jan 20 15:01:10 crc kubenswrapper[4949]: I0120 15:01:10.559624 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" podStartSLOduration=1.954863482 podStartE2EDuration="4.559607298s" podCreationTimestamp="2026-01-20 15:01:06 +0000 UTC" 
firstStartedPulling="2026-01-20 15:01:07.300292023 +0000 UTC m=+663.110122881" lastFinishedPulling="2026-01-20 15:01:09.905035839 +0000 UTC m=+665.714866697" observedRunningTime="2026-01-20 15:01:10.55682702 +0000 UTC m=+666.366657878" watchObservedRunningTime="2026-01-20 15:01:10.559607298 +0000 UTC m=+666.369438156" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.527278 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.528126 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.529769 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jvqm8" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.542898 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.543954 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.548883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.554084 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.567584 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ndwpd"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.568429 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.573033 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652250 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8gb\" (UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652337 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652417 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652443 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.660191 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.660838 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.664903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.664923 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.665238 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bsvwp" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.680031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753234 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8gb\" (UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 
15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753331 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753419 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: 
\"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753479 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: E0120 15:01:11.753604 4949 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 20 15:01:11 crc kubenswrapper[4949]: E0120 15:01:11.753671 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair podName:71837cd3-c24a-4d86-b59f-28330f7d2809 nodeName:}" failed. No retries permitted until 2026-01-20 15:01:12.253649786 +0000 UTC m=+668.063480654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-twsz5" (UID: "71837cd3-c24a-4d86-b59f-28330f7d2809") : secret "openshift-nmstate-webhook" not found Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753913 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.754135 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.777297 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.785247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.789238 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8gb\" 
(UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.846329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.846406 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.847280 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.854719 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.855167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.855214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.856192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.873436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.880765 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.888254 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.916649 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"] Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956423 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956550 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956574 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.975025 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.047966 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"] Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.057710 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696be671_724b_4447_ba02_730dd10fc489.slice/crio-1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12 WatchSource:0}: Error finding container 1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12: Status 404 returned error can't find the container with id 1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12 Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.057895 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.058000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.058076 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc 
kubenswrapper[4949]: I0120 15:01:12.059260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059864 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059880 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060016 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060081 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.061552 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.064160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.065344 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.074892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.151184 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a366383_883e_4f7e_b656_d23eb0fe6294.slice/crio-bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940 WatchSource:0}: Error finding container bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940: Status 404 returned error can't find the container with id bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940 Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.151943 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"] Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.216785 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.262153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.268612 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.434702 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"]
Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.437491 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f26abc_c431_4136_94c2_af8e66f624a3.slice/crio-98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f WatchSource:0}: Error finding container 98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f: Status 404 returned error can't find the container with id 98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.471841 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.556208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-78qpx" event={"ID":"21f26abc-c431-4136-94c2-af8e66f624a3","Type":"ContainerStarted","Data":"98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.557151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ndwpd" event={"ID":"248f6a09-0064-4d9f-a4d7-13a92b06ee72","Type":"ContainerStarted","Data":"0d886b19e97a080a5a84ba7d91d52cf62c368da8a2e4033a66a6800cba5d3d64"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.558024 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" event={"ID":"7a366383-883e-4f7e-b656-d23eb0fe6294","Type":"ContainerStarted","Data":"bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.558867 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.649164 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"]
Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.568829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-78qpx" event={"ID":"21f26abc-c431-4136-94c2-af8e66f624a3","Type":"ContainerStarted","Data":"4511b802b4c91c97ca61c7e9f49a532767b00c330949bb672dce5f0421c45aa6"}
Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.570355 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" event={"ID":"71837cd3-c24a-4d86-b59f-28330f7d2809","Type":"ContainerStarted","Data":"7c9c29bf669f95ba397dd954775c84e5b489f9a129c32fd6a62d3af40eaae06b"}
Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.591448 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bbcc9b596-78qpx" podStartSLOduration=2.5914033869999997 podStartE2EDuration="2.591403387s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:01:13.589355032 +0000 UTC m=+669.399185900" watchObservedRunningTime="2026-01-20 15:01:13.591403387 +0000 UTC m=+669.401234245"
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.582204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" event={"ID":"7a366383-883e-4f7e-b656-d23eb0fe6294","Type":"ContainerStarted","Data":"b45ee6437aaa11af35ef9a8e6859327c64e1e615bd02af5f249bd3676dbb9a9c"}
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.585035 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"59e9842866e46aee0c354e986556ee76d2c048e1085726f48092e4e184d33f64"}
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.586054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" event={"ID":"71837cd3-c24a-4d86-b59f-28330f7d2809","Type":"ContainerStarted","Data":"146316af6a9ed3c54fca36d5fd02f5dde1a1f38bde36279909e6566bf3f0fbe4"}
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.586411 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.587964 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ndwpd" event={"ID":"248f6a09-0064-4d9f-a4d7-13a92b06ee72","Type":"ContainerStarted","Data":"c8cb9df19580dc5e2aa20327e006450d2f1bc93c11a4be8004a8d50994197f92"}
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.588373 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.602007 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" podStartSLOduration=2.206048656 podStartE2EDuration="4.601977719s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:12.154154177 +0000 UTC m=+667.963985035" lastFinishedPulling="2026-01-20 15:01:14.55008324 +0000 UTC m=+670.359914098" observedRunningTime="2026-01-20 15:01:15.599142769 +0000 UTC m=+671.408973627" watchObservedRunningTime="2026-01-20 15:01:15.601977719 +0000 UTC m=+671.411808627"
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.615937 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" podStartSLOduration=2.712243099 podStartE2EDuration="4.615920051s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:12.651864511 +0000 UTC m=+668.461695369" lastFinishedPulling="2026-01-20 15:01:14.555541423 +0000 UTC m=+670.365372321" observedRunningTime="2026-01-20 15:01:15.614022171 +0000 UTC m=+671.423853029" watchObservedRunningTime="2026-01-20 15:01:15.615920051 +0000 UTC m=+671.425750919"
Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.630510 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ndwpd" podStartSLOduration=1.987427292 podStartE2EDuration="4.630493794s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:11.914792335 +0000 UTC m=+667.724623193" lastFinishedPulling="2026-01-20 15:01:14.557858837 +0000 UTC m=+670.367689695" observedRunningTime="2026-01-20 15:01:15.628297724 +0000 UTC m=+671.438128762" watchObservedRunningTime="2026-01-20 15:01:15.630493794 +0000 UTC m=+671.440324652"
Jan 20 15:01:17 crc kubenswrapper[4949]: I0120 15:01:17.601394 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"43732728b63cf52bad6242f99b19d88f3170665a888d7cd6f11991e3f738aa68"}
Jan 20 15:01:21 crc kubenswrapper[4949]: I0120 15:01:21.921870 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:21 crc kubenswrapper[4949]: I0120 15:01:21.944680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" podStartSLOduration=6.090691673 podStartE2EDuration="10.944664239s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:12.060329841 +0000 UTC m=+667.870160699" lastFinishedPulling="2026-01-20 15:01:16.914302407 +0000 UTC m=+672.724133265" observedRunningTime="2026-01-20 15:01:17.626623728 +0000 UTC m=+673.436454586" watchObservedRunningTime="2026-01-20 15:01:21.944664239 +0000 UTC m=+677.754495097"
Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.218178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.218892 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.226132 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.640407 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.713083 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"]
Jan 20 15:01:32 crc kubenswrapper[4949]: I0120 15:01:32.481559 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.112199 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"]
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.113997 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.116169 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.124090 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"]
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255094 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255424 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356792 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356889 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356912 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.357412 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.357745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.382422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.429440 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.682716 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"]
Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.779995 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerStarted","Data":"8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181"}
Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.775384 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-w9d9r" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" containerID="cri-o://203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" gracePeriod=15
Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.788189 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerID="4957e0b30f586ade878532ac3515259b4d4e851e6da96f31c3fec1bd774823ed" exitCode=0
Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.788245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"4957e0b30f586ade878532ac3515259b4d4e851e6da96f31c3fec1bd774823ed"}
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.222028 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w9d9r_37539dae-2103-4b6c-871c-48b0c35a1850/console/0.log"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.222274 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386006 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386157 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386317 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387475 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") "
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387796 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config" (OuterVolumeSpecName: "console-config") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca" (OuterVolumeSpecName: "service-ca") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388507 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388838 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388866 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.390980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.391008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.391394 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp" (OuterVolumeSpecName: "kube-api-access-kcfsp") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "kube-api-access-kcfsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490309 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490557 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490667 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490751 4949 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490824 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797117 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w9d9r_37539dae-2103-4b6c-871c-48b0c35a1850/console/0.log"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797170 4949 generic.go:334] "Generic (PLEG): container finished" podID="37539dae-2103-4b6c-871c-48b0c35a1850" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" exitCode=2
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797236 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.805946 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerDied","Data":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"}
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.805981 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerDied","Data":"f4877eaf97bd7c4d0e52e4fddc8cae7a451b37b3fd251230d5ececd8dac1c70e"}
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.806000 4949 scope.go:117] "RemoveContainer" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.830714 4949 scope.go:117] "RemoveContainer" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"
Jan 20 15:01:48 crc kubenswrapper[4949]: E0120 15:01:48.831573 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": container with ID starting with 203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721 not found: ID does not exist" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.831663 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"} err="failed to get container status \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": rpc error: code = NotFound desc = could not find container \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": container with ID starting with 203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721 not found: ID does not exist"
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.844343 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"]
Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.848672 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"]
Jan 20 15:01:49 crc kubenswrapper[4949]: I0120 15:01:49.811867 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerID="4c4cabaf73dfcbd9c156986045d5cf50fe6169932361f99b8511714d0215e960" exitCode=0
Jan 20 15:01:49 crc kubenswrapper[4949]: I0120 15:01:49.811958 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"4c4cabaf73dfcbd9c156986045d5cf50fe6169932361f99b8511714d0215e960"}
Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.798556 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" path="/var/lib/kubelet/pods/37539dae-2103-4b6c-871c-48b0c35a1850/volumes"
Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.824044 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerID="21db6d2d22952f7c3040634cd81293adc0ed863a2ba69becd94d1c5a829477cb" exitCode=0
Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.824093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"21db6d2d22952f7c3040634cd81293adc0ed863a2ba69becd94d1c5a829477cb"}
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.093415 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") "
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251582 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") "
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") "
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.253179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle" (OuterVolumeSpecName: "bundle") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.264455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv" (OuterVolumeSpecName: "kube-api-access-gtcvv") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "kube-api-access-gtcvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.272559 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util" (OuterVolumeSpecName: "util") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353370 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353407 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353420 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181"}
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844377 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181"
Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844383 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"
Jan 20 15:01:57 crc kubenswrapper[4949]: I0120 15:01:57.152265 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:01:57 crc kubenswrapper[4949]: I0120 15:01:57.152882 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.491987 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"]
Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492584 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492599 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract"
Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492618 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="pull"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492627 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="pull"
Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492640 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="util"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492649 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="util"
Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492662 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492669 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492799 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492813 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.493272 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495832 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495840 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.497334 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.499367 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2ntrl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.505371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"]
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571381 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571431 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571466 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"
Jan 20 15:02:01 crc
kubenswrapper[4949]: I0120 15:02:01.678407 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.678886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.696646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.748888 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.749710 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755153 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755192 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g658z" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.763810 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.810795 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.875777 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.876297 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.876434 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977346 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977426 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.992441 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod 
\"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.992885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.996155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.065916 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.076593 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"] Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.483496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:02 crc kubenswrapper[4949]: W0120 15:02:02.491683 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418359eb_1dea_4f02_9964_9ab810e3bc09.slice/crio-b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce WatchSource:0}: Error finding container b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce: Status 404 returned error can't find the container with id b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.913439 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" event={"ID":"aab28d03-013d-4f55-8f5d-4452aa51ae0b","Type":"ContainerStarted","Data":"2da35e27e23c661d374a95e2f369f88ff8ad4b5b61794dc6ef136f820018cb7b"} Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.914511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" event={"ID":"418359eb-1dea-4f02-9964-9ab810e3bc09","Type":"ContainerStarted","Data":"b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce"} Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.931807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" 
event={"ID":"aab28d03-013d-4f55-8f5d-4452aa51ae0b","Type":"ContainerStarted","Data":"e779d556b8a5d0b8149abc656c5078b13a7ae8b898c2e76d33f7473f0dc59c56"} Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.932777 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.955347 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" podStartSLOduration=1.655204922 podStartE2EDuration="4.95532863s" podCreationTimestamp="2026-01-20 15:02:01 +0000 UTC" firstStartedPulling="2026-01-20 15:02:02.097007239 +0000 UTC m=+717.906838097" lastFinishedPulling="2026-01-20 15:02:05.397130947 +0000 UTC m=+721.206961805" observedRunningTime="2026-01-20 15:02:05.953696459 +0000 UTC m=+721.763527337" watchObservedRunningTime="2026-01-20 15:02:05.95532863 +0000 UTC m=+721.765159508" Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.958363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" event={"ID":"418359eb-1dea-4f02-9964-9ab810e3bc09","Type":"ContainerStarted","Data":"9dd1923c007087a728220d7a810a22bddb0350ec484e36c06558c22b3ddb24ff"} Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.959174 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.983116 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" podStartSLOduration=1.962679622 podStartE2EDuration="8.983099814s" podCreationTimestamp="2026-01-20 15:02:01 +0000 UTC" firstStartedPulling="2026-01-20 15:02:02.494564307 +0000 UTC m=+718.304395165" lastFinishedPulling="2026-01-20 
15:02:09.514984499 +0000 UTC m=+725.324815357" observedRunningTime="2026-01-20 15:02:09.978605932 +0000 UTC m=+725.788436790" watchObservedRunningTime="2026-01-20 15:02:09.983099814 +0000 UTC m=+725.792930672" Jan 20 15:02:22 crc kubenswrapper[4949]: I0120 15:02:22.071236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:27 crc kubenswrapper[4949]: I0120 15:02:27.152212 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:02:27 crc kubenswrapper[4949]: I0120 15:02:27.152879 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:02:37 crc kubenswrapper[4949]: I0120 15:02:37.956875 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 15:02:41 crc kubenswrapper[4949]: I0120 15:02:41.814117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.641532 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.642400 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.644914 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.648993 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gdn5t" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.659173 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hg78r"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.661565 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.663760 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.664015 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.666437 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698270 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: 
\"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698490 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698549 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" 
Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698615 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.731606 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-znbk6"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.732826 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735200 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dkdfl" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735989 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.738943 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.753036 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.753880 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.757057 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.773986 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799849 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799916 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799943 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8rl\" (UniqueName: \"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800044 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800069 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800089 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800105 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800154 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800190 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscpg\" (UniqueName: 
\"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800229 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800348 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800845 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.801102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.801902 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.806793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.816830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.817318 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.843589 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr8rl\" (UniqueName: 
\"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900926 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscpg\" (UniqueName: \"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901025 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: 
\"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901143 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901190 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:43.401175421 +0000 UTC m=+759.211006279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "metallb-memberlist" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901406 4949 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901451 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:43.40143885 +0000 UTC m=+759.211269708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "speaker-certs-secret" not found Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.902160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.903962 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.906493 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.915239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.916924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jscpg\" (UniqueName: \"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.917772 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rr8rl\" (UniqueName: \"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.980152 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.989611 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.069109 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.188188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.407222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.407291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: E0120 15:02:43.407395 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 15:02:43 crc kubenswrapper[4949]: E0120 15:02:43.407459 4949 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:44.407441347 +0000 UTC m=+760.217272205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "metallb-memberlist" not found Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.413014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.478983 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:43 crc kubenswrapper[4949]: W0120 15:02:43.486702 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb76ab7ec_a978_4aea_bc88_b2a82bc54e14.slice/crio-a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c WatchSource:0}: Error finding container a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c: Status 404 returned error can't find the container with id a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.147468 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"1a8ca8ed654a4ed23a7199281dfe8c28f7779bc9de631fa052cff859c4a974d3"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.149859 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"56120539a840af478f8ce0cc422d4d207d2bedcda9ca66873ccbfa87454a7ec1"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.149900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.150846 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" event={"ID":"9787b339-5a35-4568-8ea4-12b8904efd8a","Type":"ContainerStarted","Data":"2ee89685ddd2cbc321aa48c2155cd209678033cfcc14ba66082eacf348a7d80e"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.421765 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.427854 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.550430 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-znbk6" Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.159709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"3f75978820f7a48290b98bf141393e817a82264b49e2dbe838b4a6bbd7ad8135"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.160555 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.165426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"638fc6e53b428108a25abf17c10c92723b88acf2678a175224d79a74839c45cd"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.165460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"04e92b5666ffcc0b80a5cc2d3b5b7fe5b13b97e93c7ffbd6fefc175c3329e433"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.179884 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-n6txw" podStartSLOduration=3.179863456 podStartE2EDuration="3.179863456s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:45.175114276 +0000 UTC m=+760.984945134" watchObservedRunningTime="2026-01-20 15:02:45.179863456 +0000 UTC m=+760.989694314" Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.176930 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" 
event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"816749ed41e1d8e97bfe50da9f6c70dd7cb5d977d7c5b218159fca35074ab4b6"} Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.177277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-znbk6" Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.199670 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-znbk6" podStartSLOduration=4.199648728 podStartE2EDuration="4.199648728s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:46.194117602 +0000 UTC m=+762.003948480" watchObservedRunningTime="2026-01-20 15:02:46.199648728 +0000 UTC m=+762.009479596" Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.209166 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" event={"ID":"9787b339-5a35-4568-8ea4-12b8904efd8a","Type":"ContainerStarted","Data":"c083be359aebfb79c5be61260cbf497aceb3680ada636dba8f4c8dcd0a8de545"} Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.210534 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.220269 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="aa083f2b10e79b846e6404f8535b0c376e53fe6ab0ee9c1d7b0c15ae24bf62c2" exitCode=0 Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.220347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"aa083f2b10e79b846e6404f8535b0c376e53fe6ab0ee9c1d7b0c15ae24bf62c2"} Jan 20 15:02:50 crc kubenswrapper[4949]: 
I0120 15:02:50.230078 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" podStartSLOduration=1.429542177 podStartE2EDuration="8.230059905s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="2026-01-20 15:02:43.200026299 +0000 UTC m=+759.009857147" lastFinishedPulling="2026-01-20 15:02:50.000544007 +0000 UTC m=+765.810374875" observedRunningTime="2026-01-20 15:02:50.229121596 +0000 UTC m=+766.038952474" watchObservedRunningTime="2026-01-20 15:02:50.230059905 +0000 UTC m=+766.039890763" Jan 20 15:02:51 crc kubenswrapper[4949]: I0120 15:02:51.227110 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="a0b7ba436ec550fe7ed707983b5ee58da70d66a8db6fe84257a0fb6896f0612c" exitCode=0 Jan 20 15:02:51 crc kubenswrapper[4949]: I0120 15:02:51.227203 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"a0b7ba436ec550fe7ed707983b5ee58da70d66a8db6fe84257a0fb6896f0612c"} Jan 20 15:02:52 crc kubenswrapper[4949]: I0120 15:02:52.233685 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="e149c2e8e0f3a8987f7a3a0e34e5d146c5d9dca17cb635897c080c3fb7825071" exitCode=0 Jan 20 15:02:52 crc kubenswrapper[4949]: I0120 15:02:52.233732 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"e149c2e8e0f3a8987f7a3a0e34e5d146c5d9dca17cb635897c080c3fb7825071"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.074075 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.245927 4949 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"a28c88a7a7cfa1ec5486f63fc72aebda4a3dd1f79cae1002984030ff7b6bfb5d"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"23bc1730eb87179f1dfade96fb381da25b1b652afe5003e60a3fdadfaaf0ad4e"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"fa82ad50fa2d2bbe933c9566d0f3e58fd75dad0479de1af361d6f93827f100e5"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246224 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"94f903c07094965b94f780b75ecf48c575de79738df3e346ad5ac9f709656d31"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"472453ccb55a102b1a8554bb294b19796fc890a1fd6dfa6006d2390ef8a12e2b"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"690e67cd8810c5695565baec2f3c508d079030e2793d4d26cca191b759698556"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.280056 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hg78r" podStartSLOduration=5.568454737 podStartE2EDuration="12.280040535s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="2026-01-20 15:02:43.270152633 +0000 UTC m=+759.079983491" lastFinishedPulling="2026-01-20 15:02:49.981738431 +0000 UTC m=+765.791569289" observedRunningTime="2026-01-20 15:02:54.278098713 +0000 UTC m=+770.087929571" watchObservedRunningTime="2026-01-20 15:02:54.280040535 +0000 UTC m=+770.089871393" Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.554118 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-znbk6" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152795 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152896 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152972 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.153900 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.154031 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" gracePeriod=600 Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.654703 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.655844 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.657368 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lc72c" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.657979 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.660201 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.673181 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.829243 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: 
\"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.930245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.949321 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.971203 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.990241 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.046100 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.224884 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.294874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerStarted","Data":"21e6d542117cc9d67c8aa8bc925382b2cf3dc1ec20c1296d73849b576dc55ff3"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.297672 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" exitCode=0 Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299637 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299702 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299722 4949 scope.go:117] "RemoveContainer" 
containerID="359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" Jan 20 15:03:00 crc kubenswrapper[4949]: I0120 15:03:00.326000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerStarted","Data":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} Jan 20 15:03:00 crc kubenswrapper[4949]: I0120 15:03:00.351651 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vlncl" podStartSLOduration=1.607974749 podStartE2EDuration="3.351626846s" podCreationTimestamp="2026-01-20 15:02:57 +0000 UTC" firstStartedPulling="2026-01-20 15:02:58.240697521 +0000 UTC m=+774.050528379" lastFinishedPulling="2026-01-20 15:02:59.984349618 +0000 UTC m=+775.794180476" observedRunningTime="2026-01-20 15:03:00.345428399 +0000 UTC m=+776.155259337" watchObservedRunningTime="2026-01-20 15:03:00.351626846 +0000 UTC m=+776.161457714" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.041746 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.642898 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.643668 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.651793 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.737841 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.839235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.864103 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.957737 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.337091 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vlncl" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" containerID="cri-o://a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" gracePeriod=2 Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.388560 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:02 crc kubenswrapper[4949]: W0120 15:03:02.460392 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda06c3c7b_913e_412e_833e_fcd7df154877.slice/crio-cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9 WatchSource:0}: Error finding container cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9: Status 404 returned error can't find the container with id cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9 Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.657336 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.766347 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"d0e5c180-d948-4012-b87d-f3da18868659\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.773349 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml" (OuterVolumeSpecName: "kube-api-access-vtsml") pod "d0e5c180-d948-4012-b87d-f3da18868659" (UID: "d0e5c180-d948-4012-b87d-f3da18868659"). InnerVolumeSpecName "kube-api-access-vtsml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.868161 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.985494 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.993553 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345086 4949 generic.go:334] "Generic (PLEG): container finished" podID="d0e5c180-d948-4012-b87d-f3da18868659" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" exitCode=0 Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" 
event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerDied","Data":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345468 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerDied","Data":"21e6d542117cc9d67c8aa8bc925382b2cf3dc1ec20c1296d73849b576dc55ff3"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345505 4949 scope.go:117] "RemoveContainer" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345168 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.347871 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nf5l6" event={"ID":"a06c3c7b-913e-412e-833e-fcd7df154877","Type":"ContainerStarted","Data":"e88d3aed6e25929dbd093f20fc973583fb5e8d1915ae5ecb926b7c66a2da715c"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.347927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nf5l6" event={"ID":"a06c3c7b-913e-412e-833e-fcd7df154877","Type":"ContainerStarted","Data":"cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.364206 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.370924 4949 scope.go:117] "RemoveContainer" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.371212 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:03 crc kubenswrapper[4949]: E0120 15:03:03.371444 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": container with ID starting with a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de not found: ID does not exist" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.371566 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} err="failed to get container status \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": rpc error: code = NotFound desc = could not find container \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": container with ID starting with a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de not found: ID does not exist" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.383211 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nf5l6" podStartSLOduration=2.333116699 podStartE2EDuration="2.383188947s" podCreationTimestamp="2026-01-20 15:03:01 +0000 UTC" firstStartedPulling="2026-01-20 15:03:02.464316076 +0000 UTC m=+778.274146934" lastFinishedPulling="2026-01-20 15:03:02.514388324 +0000 UTC m=+778.324219182" observedRunningTime="2026-01-20 15:03:03.375865975 +0000 UTC m=+779.185696833" watchObservedRunningTime="2026-01-20 15:03:03.383188947 +0000 UTC m=+779.193019815" Jan 20 15:03:04 crc kubenswrapper[4949]: I0120 15:03:04.804973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e5c180-d948-4012-b87d-f3da18868659" 
path="/var/lib/kubelet/pods/d0e5c180-d948-4012-b87d-f3da18868659/volumes" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.958638 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.959067 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.982183 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.433353 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885485 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:12 crc kubenswrapper[4949]: E0120 15:03:12.885741 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885760 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885891 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.886677 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.888282 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bn5t6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.896529 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.008886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.009136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.009210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 
15:03:13.110593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.110861 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111336 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.135363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.212480 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.648236 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:13 crc kubenswrapper[4949]: W0120 15:03:13.653612 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb9083f_e7f3_412d_9322_122ad5dcaaf6.slice/crio-abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2 WatchSource:0}: Error finding container abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2: Status 404 returned error can't find the container with id abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2 Jan 20 15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.421998 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="ad5c83338439259468d606281ac993a876bc28579e8521af3cf681fdb9ccd198" exitCode=0 Jan 20 
15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.422113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"ad5c83338439259468d606281ac993a876bc28579e8521af3cf681fdb9ccd198"} Jan 20 15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.422386 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerStarted","Data":"abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2"} Jan 20 15:03:15 crc kubenswrapper[4949]: I0120 15:03:15.434175 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="b8e2f4c871bb26abe03346eb055e2fd315374c1955b91f431271319d451d52d4" exitCode=0 Jan 20 15:03:15 crc kubenswrapper[4949]: I0120 15:03:15.434272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"b8e2f4c871bb26abe03346eb055e2fd315374c1955b91f431271319d451d52d4"} Jan 20 15:03:16 crc kubenswrapper[4949]: I0120 15:03:16.444424 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="326f41a6ddc6bb06c08af6d69d58fa5aedff7ec32bf061a8f18c0d1ff3a3f3f7" exitCode=0 Jan 20 15:03:16 crc kubenswrapper[4949]: I0120 15:03:16.444550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"326f41a6ddc6bb06c08af6d69d58fa5aedff7ec32bf061a8f18c0d1ff3a3f3f7"} Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.676565 
4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773117 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773184 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773900 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle" (OuterVolumeSpecName: "bundle") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.774365 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.778981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p" (OuterVolumeSpecName: "kube-api-access-wkm9p") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "kube-api-access-wkm9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.788176 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util" (OuterVolumeSpecName: "util") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.875694 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.875732 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2"} Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460387 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2" Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460497 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438043 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438561 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="pull" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438575 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="pull" Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438586 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="util" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438592 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="util" Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438608 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438614 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438747 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.439148 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.442047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6tl2v" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.443574 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.584692 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.685915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.703860 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.757755 4949 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:26 crc kubenswrapper[4949]: I0120 15:03:26.250035 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:26 crc kubenswrapper[4949]: I0120 15:03:26.508716 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" event={"ID":"fa13f464-1245-4c7e-ba74-47e65076c9d1","Type":"ContainerStarted","Data":"58e8aa88e359c4df7771fd55a5fd36b41e4d73281397a2132711cc924ff5a1cd"} Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.544742 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" event={"ID":"fa13f464-1245-4c7e-ba74-47e65076c9d1","Type":"ContainerStarted","Data":"8f4d0e3069b3145f5b5f51c4e3dae970ac14431f2290d416f8271e7754cf0904"} Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.545575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.580884 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" podStartSLOduration=2.087686345 podStartE2EDuration="6.580846504s" podCreationTimestamp="2026-01-20 15:03:25 +0000 UTC" firstStartedPulling="2026-01-20 15:03:26.266379386 +0000 UTC m=+802.076210244" lastFinishedPulling="2026-01-20 15:03:30.759539525 +0000 UTC m=+806.569370403" observedRunningTime="2026-01-20 15:03:31.573358036 +0000 UTC m=+807.383188894" watchObservedRunningTime="2026-01-20 15:03:31.580846504 +0000 UTC m=+807.390677402" Jan 20 15:03:35 crc kubenswrapper[4949]: I0120 15:03:35.760989 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.579369 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.580791 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.582979 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.583080 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-46qzz" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.583733 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.588624 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nzgqj" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.593911 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.594805 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.599543 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.601030 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dxkp8" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.611934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.619267 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.620294 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.622610 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-78bc6" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.640100 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.650042 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.664511 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.665366 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.670065 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fjz6j" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.684638 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696028 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696124 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696149 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696206 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.703171 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.704785 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.709287 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5t25b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.714402 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.721714 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.723157 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.726790 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.727022 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cl6dx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.734174 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.741840 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.742702 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.745977 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tcs88" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.756035 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.756918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.762202 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lb2fw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.762415 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.765917 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.766829 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.786000 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gvf4w" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.798939 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808110 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808578 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.831175 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.840690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.848265 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.853679 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.855470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.866246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.868617 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.869405 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.872558 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2rc64" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.874552 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.875294 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.881174 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-494nf" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.890456 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.904826 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.908995 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrjs\" (UniqueName: \"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913924 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914097 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod 
\"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914192 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: \"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.923387 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.927235 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.928241 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.930166 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2g2wl" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.935594 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.936030 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.936805 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943031 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-28rt8" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943231 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943629 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.958003 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.975337 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.976690 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.979713 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pqgbt" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.981321 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.981686 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.982421 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.988177 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tpmfw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.988381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.990667 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.996414 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.997365 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.001708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p784k" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.004876 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.012205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015462 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015572 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: 
\"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015602 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrjs\" (UniqueName: \"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015621 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b242h\" (UniqueName: \"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015643 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015679 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 
15:03:56.015726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015753 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod \"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.015860 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.015932 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:56.515915253 +0000 UTC m=+832.325746111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.029485 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.030583 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.037200 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.037295 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b7wzx" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.039938 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: \"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.045156 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrjs\" (UniqueName: 
\"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.049389 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.052557 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod \"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.065459 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.071569 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.084328 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.085388 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.090628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hnhmq" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.100709 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118463 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118502 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118581 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118606 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118653 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b242h\" (UniqueName: \"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mg6\" 
(UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.134267 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157641 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b242h\" (UniqueName: \"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.178408 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.179272 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.181740 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fdv8t"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.189497 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.204366 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.220490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222249 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222280 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mg6\" (UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222414 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.223475 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.223532 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:56.723501438 +0000 UTC m=+832.533332296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.223990 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.244137 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.245643 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.258732 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-25j8l"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.259446 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.260324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.277422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mg6\" (UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.277526 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.278163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.281715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.301602 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.328065 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.328186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.328286 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.348271 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.348333 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.357414 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.363927 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.366869 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nhvd5"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.366889 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.367063 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.374763 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.375065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.378067 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.394053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.403242 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.404343 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.412076 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nxhjp"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.420732 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.429214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.429270 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.453597 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.460225 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.504824 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.524307 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531437 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531563 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.531794 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.531837 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.531822059 +0000 UTC m=+833.341652917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.576824 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.583122 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633343 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633445 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.633728 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.633797 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.133780922 +0000 UTC m=+832.943611780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.634085 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.634118 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.134109992 +0000 UTC m=+832.943940850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.634281 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.666767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.667767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.684899 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.732721 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.735596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.735734 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.735780 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.735767905 +0000 UTC m=+833.545598763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.743788 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.961638 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.979438 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.985052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"]
Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.990190 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"]
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.064745 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eae4c51_3e86_4153_8c26_d4c51b2f1331.slice/crio-46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f WatchSource:0}: Error finding container 46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f: Status 404 returned error can't find the container with id 46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.141752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.141825 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141894 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141940 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141948 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:58.141935443 +0000 UTC m=+833.951766301 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141966 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:58.141958643 +0000 UTC m=+833.951789501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.390948 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"]
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.397717 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728be0e4_4dde_4f00_be4f_af6590d7025b.slice/crio-aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90 WatchSource:0}: Error finding container aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90: Status 404 returned error can't find the container with id aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.422711 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"]
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.457651 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"]
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.463542 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"]
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.469068 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"]
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.486989 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc5c569e_c0ee_44bc_bdc9_397ab5941ad5.slice/crio-476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6 WatchSource:0}: Error finding container 476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6: Status 404 returned error can't find the container with id 476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.507508 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"]
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.516253 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017942ba_9ec1_4474_91e5_7adb1481e807.slice/crio-09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18 WatchSource:0}: Error finding container 09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18: Status 404 returned error can't find the container with id 09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18
Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.516618 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"]
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.517771 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05642ba7_89bd_4d72_a31b_4e6d4532923e.slice/crio-05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab WatchSource:0}: Error finding container 05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab: Status 404 returned error can't find the container with id 05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab
Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.520740 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4c21b1_de25_4c17_a3c3_e6eea4044d77.slice/crio-33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886 WatchSource:0}: Error finding container 33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886: Status 404 returned error can't find the container with id 33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.520854 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b242h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-ljxrw_openstack-operators(017942ba-9ec1-4474-91e5-7adb1481e807): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.520879 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rrjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-5vwt4_openstack-operators(05642ba7-89bd-4d72-a31b-4e6d4532923e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.522368 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.522382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.523120 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.523790 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-nr2lr_openstack-operators(db4c21b1-de25-4c17-a3c3-e6eea4044d77): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.524297 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd02df557_c289_4444_b29b_917ea271a874.slice/crio-44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81 WatchSource:0}: Error finding container 44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81: Status 404 returned error can't find the container with id 44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81 Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.525392 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.528954 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bl4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-g87xm_openstack-operators(d02df557-c289-4444-b29b-917ea271a874): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.530460 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.531180 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.538192 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.543096 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.549971 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.550176 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.550332 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:59.550305879 +0000 UTC m=+835.360136747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.614769 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.670045 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.701985 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.716354 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" 
event={"ID":"d02df557-c289-4444-b29b-917ea271a874","Type":"ContainerStarted","Data":"44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.718600 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.725536 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" event={"ID":"63acb80f-21b4-4255-af60-03a68dd07658","Type":"ContainerStarted","Data":"9e65ac1944fe94635f159bf2febf4c281e815f5269823a8f05daf26da9bbac39"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.727435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" event={"ID":"e60d05a5-d1d5-4959-843b-654aaf547bca","Type":"ContainerStarted","Data":"9b4669810b421e585ac06697c3022766dfcc043f88a798897388d1d171da0c10"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.728771 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" event={"ID":"57182814-f19c-4247-b774-5b01afe7d680","Type":"ContainerStarted","Data":"4dd748d6572dde0fb6e24953afa69276410f307c907dd4c26af6469aa3a34832"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.730041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" 
event={"ID":"070a47eb-d68f-4208-86eb-a99f0a9ce5df","Type":"ContainerStarted","Data":"5707d64ee29610801066ac15d60d7045e36913d88dee4078d7258fbea6c5dd34"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.731216 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" event={"ID":"070f7ba5-a528-4316-8484-4ea82fb70a40","Type":"ContainerStarted","Data":"97d49aef181968404d37cb2582664f3a9e7ac3d880fa6f9edea6fc5ada1d1cb5"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.732212 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" event={"ID":"05642ba7-89bd-4d72-a31b-4e6d4532923e","Type":"ContainerStarted","Data":"05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.735792 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.736546 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" event={"ID":"017942ba-9ec1-4474-91e5-7adb1481e807","Type":"ContainerStarted","Data":"09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.737996 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.740925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" event={"ID":"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5","Type":"ContainerStarted","Data":"476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.754874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.755114 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.755326 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:59.75516008 +0000 UTC m=+835.564990938 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.760443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" event={"ID":"a87686a4-1af3-4d05-ac2d-15551c80e0d7","Type":"ContainerStarted","Data":"23d7bb8e58edaa62e2366a1088bb6a3cce6d21028920d0e40545f79fe1ae7e32"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.764173 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" event={"ID":"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e","Type":"ContainerStarted","Data":"37ca5b59d799aba0e1b4d07925d4557276bacefaa7d6478093be9651e9d97cd5"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.770084 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" event={"ID":"5eae4c51-3e86-4153-8c26-d4c51b2f1331","Type":"ContainerStarted","Data":"46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.780070 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" event={"ID":"db4c21b1-de25-4c17-a3c3-e6eea4044d77","Type":"ContainerStarted","Data":"33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.780307 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.781488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" event={"ID":"728be0e4-4dde-4f00-be4f-af6590d7025b","Type":"ContainerStarted","Data":"aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.788412 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" event={"ID":"c44d3483-738b-4aab-a4a2-1478480b6330","Type":"ContainerStarted","Data":"9b0764e089e99b2c400628d4b91cacb3b39158ceb9fd02a4ac2bade391443316"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.790617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" event={"ID":"d6706563-2c93-414e-bb49-cd74ae82d235","Type":"ContainerStarted","Data":"1e67cd1ebfaaf483fcbeac9a8459e54e81bddecce452c8600a13a34bf8ff3332"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.792408 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kztcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-jc5mh_openstack-operators(68de7d27-2202-473a-b077-d03d033244a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.793053 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" event={"ID":"58fdba15-e8ba-47fa-aca8-90f638577a6b","Type":"ContainerStarted","Data":"7fa8dc4825e204959c44029062fb61b9a506ef29f8d33244fecbdc09198430c4"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.793636 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.799497 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" 
event={"ID":"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291","Type":"ContainerStarted","Data":"992785306f5d335f8d07ec696164ebff4db246d8265fadee32a9491c45deff3b"} Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.160534 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.160934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.160806 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161066 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:00.1610505 +0000 UTC m=+835.970881358 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161011 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161099 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:00.161093892 +0000 UTC m=+835.970924750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.813959 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" event={"ID":"d770793b-0e56-43cc-9707-5d062b8f7c82","Type":"ContainerStarted","Data":"380e4d2d80e12ecbd6bf1f07861e6080f0d351efb3a269cd7fc38ddb58e7d051"} Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.818047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" event={"ID":"68de7d27-2202-473a-b077-d03d033244a2","Type":"ContainerStarted","Data":"188418b2a14d5e54ec3a7231371c638caa209beee7d6abdb67fb8cc371919648"} Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.820714 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821225 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821351 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821911 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.822502 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:59 crc kubenswrapper[4949]: I0120 15:03:59.583028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.583219 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.583299 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:03.583280894 +0000 UTC m=+839.393111742 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: I0120 15:03:59.787056 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.787480 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.787626 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:04:03.787611091 +0000 UTC m=+839.597441949 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.834156 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:04:00 crc kubenswrapper[4949]: I0120 15:04:00.193869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:00 crc kubenswrapper[4949]: I0120 15:04:00.193953 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194070 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194070 4949 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194127 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:04.194110349 +0000 UTC m=+840.003941207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194309 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:04.194268934 +0000 UTC m=+840.004099852 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: I0120 15:04:03.678216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.678622 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.678670 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:11.67865614 +0000 UTC m=+847.488486998 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: I0120 15:04:03.881206 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.881410 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.882915 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:04:11.882893534 +0000 UTC m=+847.692724472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: I0120 15:04:04.287663 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:04 crc kubenswrapper[4949]: I0120 15:04:04.287771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.287820 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.287914 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:12.287893956 +0000 UTC m=+848.097724814 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.288066 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.288198 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:12.288151314 +0000 UTC m=+848.097982232 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.712643 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.713402 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7m7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-f52ph_openstack-operators(ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.714570 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" podUID="ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.900651 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" podUID="ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.278794 4949 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.279002 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-282mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-tj7jv_openstack-operators(a87686a4-1af3-4d05-ac2d-15551c80e0d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.280151 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podUID="a87686a4-1af3-4d05-ac2d-15551c80e0d7" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.765524 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.774587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.906501 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podUID="a87686a4-1af3-4d05-ac2d-15551c80e0d7" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.967655 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cl6dx" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.967882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.975130 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.976695 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.025171 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tpmfw" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.034002 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.311375 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.312581 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjfvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-th6cb_openstack-operators(d6706563-2c93-414e-bb49-cd74ae82d235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.313755 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podUID="d6706563-2c93-414e-bb49-cd74ae82d235" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.374402 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.374486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.374721 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.374780 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:28.374761124 +0000 UTC m=+864.184591982 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.383476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.921064 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podUID="d6706563-2c93-414e-bb49-cd74ae82d235" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.931892 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.932047 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bq5vf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-ft9st_openstack-operators(2dacfd0a-8e74-4eb1-b4cb-892ae16a9291): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.933356 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" podUID="2dacfd0a-8e74-4eb1-b4cb-892ae16a9291" Jan 20 15:04:13 crc kubenswrapper[4949]: E0120 15:04:13.920762 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" podUID="2dacfd0a-8e74-4eb1-b4cb-892ae16a9291" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.053635 4949 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.054082 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bw98x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-vhsdx_openstack-operators(070a47eb-d68f-4208-86eb-a99f0a9ce5df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.055789 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podUID="070a47eb-d68f-4208-86eb-a99f0a9ce5df" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355772 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355819 4949 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355938 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rzh7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-869947677f-8qg9p_openstack-operators(63acb80f-21b4-4255-af60-03a68dd07658): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.357113 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podUID="63acb80f-21b4-4255-af60-03a68dd07658" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.834976 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.835159 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k855k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-cc9zv_openstack-operators(728be0e4-4dde-4f00-be4f-af6590d7025b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.836358 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podUID="728be0e4-4dde-4f00-be4f-af6590d7025b" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.940989 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podUID="728be0e4-4dde-4f00-be4f-af6590d7025b" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.940992 4949 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1\\\"\"" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podUID="63acb80f-21b4-4255-af60-03a68dd07658" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.941808 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podUID="070a47eb-d68f-4208-86eb-a99f0a9ce5df" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.272155 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.272438 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7jz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-pzpkv_openstack-operators(d770793b-0e56-43cc-9707-5d062b8f7c82): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.274666 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" podUID="d770793b-0e56-43cc-9707-5d062b8f7c82" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.945742 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" podUID="d770793b-0e56-43cc-9707-5d062b8f7c82" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.470782 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:04:20 crc kubenswrapper[4949]: W0120 15:04:20.486669 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07420af_b163_4ab6_8a1c_5e697629cab0.slice/crio-d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53 WatchSource:0}: Error finding container d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53: Status 404 returned error can't find the container with id d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53 Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.569381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.964815 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" event={"ID":"017942ba-9ec1-4474-91e5-7adb1481e807","Type":"ContainerStarted","Data":"069c00eb50e502ee5495e88a7b24e53b82536c38188a4a0f3c174858aa4e33f1"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.966028 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.967810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" event={"ID":"d02df557-c289-4444-b29b-917ea271a874","Type":"ContainerStarted","Data":"61956e59672aa848f27a04dda56a67c5f78cf361894360cb65adbba970b4bc34"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.968042 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.969430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" event={"ID":"68de7d27-2202-473a-b077-d03d033244a2","Type":"ContainerStarted","Data":"8fd574f89497a788420b690c83e718a2fd8b3793679a86900e4a2a9eeaa49435"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.969625 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.970973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" event={"ID":"57182814-f19c-4247-b774-5b01afe7d680","Type":"ContainerStarted","Data":"6ab54ca6e475b08df289e26cd6bb18b8d4039cf9567565d0273ff17ccc100778"} Jan 20 15:04:20 crc kubenswrapper[4949]: 
I0120 15:04:20.971081 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.972228 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" event={"ID":"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5","Type":"ContainerStarted","Data":"c4d1cddb28f278d7d9afc26391833d5302f4ddc46482cee45de37e827002e451"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.972271 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.973993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" event={"ID":"58fdba15-e8ba-47fa-aca8-90f638577a6b","Type":"ContainerStarted","Data":"2fd38171c0d1926aad9ba6835a1fb5aaa6aa3b269a83f6f8e210414e82169370"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.974039 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.975434 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" event={"ID":"db4c21b1-de25-4c17-a3c3-e6eea4044d77","Type":"ContainerStarted","Data":"c6ad7b28f7d0ac80d1d9f420bca0bae6112149c0b9e189ac28138b532b79c418"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.975556 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.976927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" event={"ID":"e60d05a5-d1d5-4959-843b-654aaf547bca","Type":"ContainerStarted","Data":"e636c3bd4493f5cc371b6e3ae4b39a12b6ae05121102784779192c0b8eda170f"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.977067 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.978686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" event={"ID":"5eae4c51-3e86-4153-8c26-d4c51b2f1331","Type":"ContainerStarted","Data":"e10bb30f584ddb06bce9f710da34bc256dcb85f408e23e8cb4b504ab886a817a"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.979112 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.980603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" event={"ID":"070f7ba5-a528-4316-8484-4ea82fb70a40","Type":"ContainerStarted","Data":"445e499399c9c0f74c879bbd7cd9a7e89244e3750e34a5c49e1fd0a422fbcf23"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.981004 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.981853 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" event={"ID":"c07420af-b163-4ab6-8a1c-5e697629cab0","Type":"ContainerStarted","Data":"d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.982656 4949 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" event={"ID":"0e576db6-d246-4a03-a2bd-8cbd7f7526fd","Type":"ContainerStarted","Data":"90a4940e8e1a9d0a8c1ebd564d6cc201e895e5384face97b0506bbbe363f1a35"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.983769 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" event={"ID":"c44d3483-738b-4aab-a4a2-1478480b6330","Type":"ContainerStarted","Data":"7309b185c876e9809be7940af27c030bba7877ce99cb0a77ea2e07b328b78420"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.984105 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.985340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" event={"ID":"05642ba7-89bd-4d72-a31b-4e6d4532923e","Type":"ContainerStarted","Data":"93a85a022c8d5e0d4a167e133fad10b4cf307fc6f29b32c91320dd3e4f8ab301"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.985690 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.119764 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" podStartSLOduration=6.835652226 podStartE2EDuration="26.119748714s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:56.763488836 +0000 UTC m=+832.573319694" lastFinishedPulling="2026-01-20 15:04:16.047585304 +0000 UTC m=+851.857416182" observedRunningTime="2026-01-20 15:04:21.106820501 +0000 UTC m=+856.916651369" watchObservedRunningTime="2026-01-20 
15:04:21.119748714 +0000 UTC m=+856.929579572" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.121979 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podStartSLOduration=3.510017165 podStartE2EDuration="26.121972561s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.520714331 +0000 UTC m=+833.330545189" lastFinishedPulling="2026-01-20 15:04:20.132669727 +0000 UTC m=+855.942500585" observedRunningTime="2026-01-20 15:04:21.013653277 +0000 UTC m=+856.823484135" watchObservedRunningTime="2026-01-20 15:04:21.121972561 +0000 UTC m=+856.931803419" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.247735 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" podStartSLOduration=6.953341745 podStartE2EDuration="26.247717595s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:56.753312657 +0000 UTC m=+832.563143515" lastFinishedPulling="2026-01-20 15:04:16.047688457 +0000 UTC m=+851.857519365" observedRunningTime="2026-01-20 15:04:21.191093287 +0000 UTC m=+857.000924145" watchObservedRunningTime="2026-01-20 15:04:21.247717595 +0000 UTC m=+857.057548443" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.307800 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podStartSLOduration=3.6791616129999998 podStartE2EDuration="26.307784726s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.5207093 +0000 UTC m=+833.330540158" lastFinishedPulling="2026-01-20 15:04:20.149332413 +0000 UTC m=+855.959163271" observedRunningTime="2026-01-20 15:04:21.255644705 +0000 UTC m=+857.065475563" watchObservedRunningTime="2026-01-20 15:04:21.307784726 +0000 
UTC m=+857.117615584" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.364393 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" podStartSLOduration=7.391482671 podStartE2EDuration="26.364372922s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.074715114 +0000 UTC m=+832.884545972" lastFinishedPulling="2026-01-20 15:04:16.047605325 +0000 UTC m=+851.857436223" observedRunningTime="2026-01-20 15:04:21.363482366 +0000 UTC m=+857.173313234" watchObservedRunningTime="2026-01-20 15:04:21.364372922 +0000 UTC m=+857.174203790" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.369211 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" podStartSLOduration=7.126922508 podStartE2EDuration="26.369188779s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.060838773 +0000 UTC m=+832.870669631" lastFinishedPulling="2026-01-20 15:04:16.303105004 +0000 UTC m=+852.112935902" observedRunningTime="2026-01-20 15:04:21.311094106 +0000 UTC m=+857.120924964" watchObservedRunningTime="2026-01-20 15:04:21.369188779 +0000 UTC m=+857.179019637" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.431536 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" podStartSLOduration=6.699414802 podStartE2EDuration="26.431498298s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.518660088 +0000 UTC m=+833.328490946" lastFinishedPulling="2026-01-20 15:04:17.250743584 +0000 UTC m=+853.060574442" observedRunningTime="2026-01-20 15:04:21.429953062 +0000 UTC m=+857.239783930" watchObservedRunningTime="2026-01-20 15:04:21.431498298 +0000 UTC m=+857.241329156" Jan 20 
15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.479680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" podStartSLOduration=5.032997034 podStartE2EDuration="26.479664269s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.506963194 +0000 UTC m=+833.316794062" lastFinishedPulling="2026-01-20 15:04:18.953630439 +0000 UTC m=+854.763461297" observedRunningTime="2026-01-20 15:04:21.47145572 +0000 UTC m=+857.281286578" watchObservedRunningTime="2026-01-20 15:04:21.479664269 +0000 UTC m=+857.289495127" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.518563 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podStartSLOduration=3.9645827000000002 podStartE2EDuration="26.518545988s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.528252259 +0000 UTC m=+833.338083117" lastFinishedPulling="2026-01-20 15:04:20.082215547 +0000 UTC m=+855.892046405" observedRunningTime="2026-01-20 15:04:21.518206068 +0000 UTC m=+857.328036926" watchObservedRunningTime="2026-01-20 15:04:21.518545988 +0000 UTC m=+857.328376846" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.553734 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podStartSLOduration=3.213163029 podStartE2EDuration="25.553709535s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.792171723 +0000 UTC m=+833.602002581" lastFinishedPulling="2026-01-20 15:04:20.132718229 +0000 UTC m=+855.942549087" observedRunningTime="2026-01-20 15:04:21.552785557 +0000 UTC m=+857.362616415" watchObservedRunningTime="2026-01-20 15:04:21.553709535 +0000 UTC m=+857.363540393" Jan 20 15:04:21 crc 
kubenswrapper[4949]: I0120 15:04:21.642285 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" podStartSLOduration=6.892898452 podStartE2EDuration="26.6422612s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.074083276 +0000 UTC m=+832.883914144" lastFinishedPulling="2026-01-20 15:04:16.823446034 +0000 UTC m=+852.633276892" observedRunningTime="2026-01-20 15:04:21.590072748 +0000 UTC m=+857.399903606" watchObservedRunningTime="2026-01-20 15:04:21.6422612 +0000 UTC m=+857.452092058" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.659855 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podStartSLOduration=4.056169998 podStartE2EDuration="26.659835813s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.523690861 +0000 UTC m=+833.333521719" lastFinishedPulling="2026-01-20 15:04:20.127356686 +0000 UTC m=+855.937187534" observedRunningTime="2026-01-20 15:04:21.624005606 +0000 UTC m=+857.433836464" watchObservedRunningTime="2026-01-20 15:04:21.659835813 +0000 UTC m=+857.469666671" Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.001550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" event={"ID":"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e","Type":"ContainerStarted","Data":"d45ab7ebb9c179b5b8a5cd81e560f446a20fe915bb946db40915f72e64458018"} Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.002122 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.021254 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" podStartSLOduration=3.276943016 podStartE2EDuration="28.021234982s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.512413649 +0000 UTC m=+833.322244507" lastFinishedPulling="2026-01-20 15:04:22.256705615 +0000 UTC m=+858.066536473" observedRunningTime="2026-01-20 15:04:23.021147239 +0000 UTC m=+858.830978097" watchObservedRunningTime="2026-01-20 15:04:23.021234982 +0000 UTC m=+858.831065840" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.018357 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" event={"ID":"c07420af-b163-4ab6-8a1c-5e697629cab0","Type":"ContainerStarted","Data":"ae0a5761819ed76fb8b7ff7b6c18bad2c628a5d523bbd3647416af6f387c45f0"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.019045 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.019658 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" event={"ID":"0e576db6-d246-4a03-a2bd-8cbd7f7526fd","Type":"ContainerStarted","Data":"17a4498a9b4b01078dd9202d605c1d0c7824002468986371dc999bf118974b72"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.020134 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.021480 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" 
event={"ID":"a87686a4-1af3-4d05-ac2d-15551c80e0d7","Type":"ContainerStarted","Data":"f516cce3a7a0c46b062f69e9ed53470d46cd98d0789f629b710d90511f559c02"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.022314 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.050643 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" podStartSLOduration=26.524288613 podStartE2EDuration="30.050613979s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:04:20.488494119 +0000 UTC m=+856.298324987" lastFinishedPulling="2026-01-20 15:04:24.014819495 +0000 UTC m=+859.824650353" observedRunningTime="2026-01-20 15:04:25.049804225 +0000 UTC m=+860.859635083" watchObservedRunningTime="2026-01-20 15:04:25.050613979 +0000 UTC m=+860.860444857" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.078953 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" podStartSLOduration=26.626493022 podStartE2EDuration="30.078930717s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:04:20.582726807 +0000 UTC m=+856.392557665" lastFinishedPulling="2026-01-20 15:04:24.035164502 +0000 UTC m=+859.844995360" observedRunningTime="2026-01-20 15:04:25.076222515 +0000 UTC m=+860.886053373" watchObservedRunningTime="2026-01-20 15:04:25.078930717 +0000 UTC m=+860.888761585" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.092430 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podStartSLOduration=2.814153241 podStartE2EDuration="30.092412077s" 
podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.461750062 +0000 UTC m=+833.271580920" lastFinishedPulling="2026-01-20 15:04:24.740008898 +0000 UTC m=+860.549839756" observedRunningTime="2026-01-20 15:04:25.088175038 +0000 UTC m=+860.898005896" watchObservedRunningTime="2026-01-20 15:04:25.092412077 +0000 UTC m=+860.902242935" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.907146 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.927027 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.946770 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.984732 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.029113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" event={"ID":"d6706563-2c93-414e-bb49-cd74ae82d235","Type":"ContainerStarted","Data":"f01a1f74db045341f4d6df0e5d64a628d6d983970a7fab30d0b85632ec3cc6aa"} Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.030647 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.046574 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podStartSLOduration=3.106384233 podStartE2EDuration="31.046558484s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.400723762 +0000 UTC m=+833.210554620" lastFinishedPulling="2026-01-20 15:04:25.340898013 +0000 UTC m=+861.150728871" observedRunningTime="2026-01-20 15:04:26.044163392 +0000 UTC m=+861.853994250" watchObservedRunningTime="2026-01-20 15:04:26.046558484 +0000 UTC m=+861.856389342" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.076617 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.248168 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.350742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.382693 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.508970 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.528085 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.585996 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.636455 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.469641 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.477844 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.536701 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nhvd5" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.548724 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.862969 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"] Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.050094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" event={"ID":"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291","Type":"ContainerStarted","Data":"dc93a4086d275f66b2d3cdf51e900f1fa48c7d8db1f42c04d3a965b1b23ec0ce"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.050650 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" event={"ID":"ec1b1a5b-0d86-40b4-9410-397d183776d0","Type":"ContainerStarted","Data":"79441e48caab8cd33138b2dd3b4e69555fca5b12fcb07c4d797859b670121d98"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" event={"ID":"ec1b1a5b-0d86-40b4-9410-397d183776d0","Type":"ContainerStarted","Data":"d5f0682db8138d307e75a84138080c173f40c4ce6215954384a5ea982cb2b51c"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051362 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.064718 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" 
podStartSLOduration=2.962822239 podStartE2EDuration="34.064697449s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.431420723 +0000 UTC m=+833.241251581" lastFinishedPulling="2026-01-20 15:04:28.533295923 +0000 UTC m=+864.343126791" observedRunningTime="2026-01-20 15:04:29.063329817 +0000 UTC m=+864.873160675" watchObservedRunningTime="2026-01-20 15:04:29.064697449 +0000 UTC m=+864.874528307" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.085980 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" podStartSLOduration=33.085962093 podStartE2EDuration="33.085962093s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:04:29.0852097 +0000 UTC m=+864.895040558" watchObservedRunningTime="2026-01-20 15:04:29.085962093 +0000 UTC m=+864.895792961" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.062694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" event={"ID":"070a47eb-d68f-4208-86eb-a99f0a9ce5df","Type":"ContainerStarted","Data":"6907e13b68ff1bf28937db41551815358fc500e7e5c3a0c38225d154de275643"} Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.063502 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.065075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" event={"ID":"63acb80f-21b4-4255-af60-03a68dd07658","Type":"ContainerStarted","Data":"55d53ff82c9a5207b70f53b4b1c25c226424378c338a279f7caf4b313f79f5df"} Jan 20 15:04:30 crc kubenswrapper[4949]: 
I0120 15:04:30.065344 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.081019 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podStartSLOduration=2.534014374 podStartE2EDuration="35.081003601s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.010001252 +0000 UTC m=+832.819832100" lastFinishedPulling="2026-01-20 15:04:29.556990459 +0000 UTC m=+865.366821327" observedRunningTime="2026-01-20 15:04:30.075143754 +0000 UTC m=+865.884974612" watchObservedRunningTime="2026-01-20 15:04:30.081003601 +0000 UTC m=+865.890834459" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.103309 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podStartSLOduration=2.672932385 podStartE2EDuration="34.103287777s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.635534403 +0000 UTC m=+833.445365261" lastFinishedPulling="2026-01-20 15:04:29.065889795 +0000 UTC m=+864.875720653" observedRunningTime="2026-01-20 15:04:30.09679347 +0000 UTC m=+865.906624328" watchObservedRunningTime="2026-01-20 15:04:30.103287777 +0000 UTC m=+865.913118645" Jan 20 15:04:31 crc kubenswrapper[4949]: I0120 15:04:31.982936 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:32 crc kubenswrapper[4949]: I0120 15:04:32.043985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:35 crc kubenswrapper[4949]: I0120 15:04:35.938966 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.136751 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.207396 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.275710 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.398544 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.582867 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:04:38 crc kubenswrapper[4949]: I0120 15:04:38.555354 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:41 crc kubenswrapper[4949]: I0120 15:04:41.137201 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" event={"ID":"d770793b-0e56-43cc-9707-5d062b8f7c82","Type":"ContainerStarted","Data":"2f340b5963a8d2c677be121f00de85ce001cbb8e921165a80c56312993aca9ce"} Jan 20 15:04:43 crc kubenswrapper[4949]: I0120 15:04:43.212440 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" 
podStartSLOduration=13.36678752 podStartE2EDuration="47.212408031s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.790169873 +0000 UTC m=+833.600000741" lastFinishedPulling="2026-01-20 15:04:31.635790394 +0000 UTC m=+867.445621252" observedRunningTime="2026-01-20 15:04:43.204002796 +0000 UTC m=+879.013833694" watchObservedRunningTime="2026-01-20 15:04:43.212408031 +0000 UTC m=+879.022238929" Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.221709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" event={"ID":"728be0e4-4dde-4f00-be4f-af6590d7025b","Type":"ContainerStarted","Data":"1186b2508bc643bf6d983a6dcf5b704e772e7280531a671157d0262183a61315"} Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.222728 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.249080 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podStartSLOduration=3.282538115 podStartE2EDuration="53.249061673s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.400442653 +0000 UTC m=+833.210273511" lastFinishedPulling="2026-01-20 15:04:47.366966181 +0000 UTC m=+883.176797069" observedRunningTime="2026-01-20 15:04:48.244814405 +0000 UTC m=+884.054645273" watchObservedRunningTime="2026-01-20 15:04:48.249061673 +0000 UTC m=+884.058892531" Jan 20 15:04:56 crc kubenswrapper[4949]: I0120 15:04:56.352347 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:04:57 crc kubenswrapper[4949]: I0120 15:04:57.152864 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:04:57 crc kubenswrapper[4949]: I0120 15:04:57.152989 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.649742 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.651840 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654448 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654559 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654463 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654596 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tnp5c" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.668787 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.694739 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 
15:05:12.697045 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.699283 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.708578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: 
\"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826073 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826187 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" 
Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.827698 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.827727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.828288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.846234 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.846546 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.977816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.020089 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.274802 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.285905 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.307359 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:13 crc kubenswrapper[4949]: W0120 15:05:13.315536 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9929c1a_9656_4f9b_b7e0_b86b7e1f5ce1.slice/crio-67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3 WatchSource:0}: Error finding container 67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3: Status 404 returned error can't find the container with id 67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3 Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.425579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" 
event={"ID":"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1","Type":"ContainerStarted","Data":"67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3"} Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.428438 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" event={"ID":"11aac808-7998-48bc-b54a-75b207b8a12b","Type":"ContainerStarted","Data":"0e4baeea71ced942bdc3203d8a06e8b6e52347327c3fa43612565ec346c789be"} Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.589413 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.615490 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.617042 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.629665 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.669481 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.669960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: 
I0120 15:05:15.669996 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771273 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771367 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.772396 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.773204 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.822017 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.935464 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.955537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.957949 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.963895 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973327 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.982693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074199 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dck96\" (UniqueName: 
\"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.075176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.075502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.110461 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.303295 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.530112 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:16 crc kubenswrapper[4949]: W0120 15:05:16.552372 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c41597_7a3e_40c0_91d3_a73771874abe.slice/crio-d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd WatchSource:0}: Error finding container d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd: Status 404 returned error can't find the container with id d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.780356 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.781415 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.784976 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785385 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785406 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785241 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785499 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785554 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpjq5" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785483 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.803413 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.808633 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.893738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.893985 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894011 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894040 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894071 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894288 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894309 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894349 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.995937 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.995974 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996013 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996075 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996188 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996263 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996540 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997214 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997899 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.998129 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.999117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.003072 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.003113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.051224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.054952 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.077044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.088908 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.093544 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.098769 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.103705 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104037 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104177 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104433 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2fdrl" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.106430 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.107765 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.117252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:17 crc 
kubenswrapper[4949]: I0120 15:05:17.158183 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.202687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204075 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204246 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204747 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204863 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204901 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204974 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.310732 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.310987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311041 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311079 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311095 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311163 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311635 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312179 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") device mount path \"/mnt/openstack/pv01\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312662 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.313152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.313638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.319327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.319377 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.320188 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.326306 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.335175 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.349877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.442992 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.472059 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerStarted","Data":"d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd"} Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.474966 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" event={"ID":"a79c257a-a3a3-4db1-8f46-a0a499808dbf","Type":"ContainerStarted","Data":"d101eef9e73f679d6e83da351b32371512e88f55aedbbee4bdb3b09d5a79f5d8"} Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.673898 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:17 crc kubenswrapper[4949]: W0120 15:05:17.691345 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4b5f65_52fe_4e8b_9d12_817e94e9b629.slice/crio-3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57 WatchSource:0}: Error finding container 3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57: Status 404 returned error can't find the container with id 3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57 Jan 20 15:05:17 crc kubenswrapper[4949]: W0120 15:05:17.912456 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c1f546_0796_457f_8b06_a5ffd11e1b36.slice/crio-554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a WatchSource:0}: Error finding container 554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a: Status 404 returned error can't find the container with id 554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.912841 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.138948 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.140031 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.142737 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pbjm2" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143140 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143268 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143833 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.152217 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.154239 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235550 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235599 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfwm\" (UniqueName: 
\"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235678 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235863 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235921 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.236033 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337111 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfwm\" (UniqueName: \"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337181 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337207 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.338614 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc 
kubenswrapper[4949]: I0120 15:05:18.338739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.339266 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.339508 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.340067 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.345378 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.352560 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.359264 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfwm\" (UniqueName: \"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.368711 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.468381 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.497043 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a"} Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.499575 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57"} Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.435302 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.436587 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.448561 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.448716 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.449032 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6lsr6" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.450712 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.462052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557913 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558076 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558247 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.659914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.659988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660108 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660200 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660786 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.663584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.664582 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.665177 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.666190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.696086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.717010 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.724196 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.762887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.773559 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.774551 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.784942 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.785509 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.794946 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.795123 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.827217 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4xkkx" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895232 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895267 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895342 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997062 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997244 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc 
kubenswrapper[4949]: I0120 15:05:19.997956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.999026 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.004729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.005082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.027283 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.067008 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:20 crc kubenswrapper[4949]: W0120 15:05:20.071917 4949 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee020527_9591_42dc_b000_3153caede9cf.slice/crio-c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e WatchSource:0}: Error finding container c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e: Status 404 returned error can't find the container with id c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.128468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.357797 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.408056 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.541475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"485725f6-91f1-413b-89f5-21bde785bd94","Type":"ContainerStarted","Data":"ea46fccc499c4238859782b49a52a77a8fb6eabb8902db29b5a5b0b74dbaf84b"}
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.544699 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e"}
Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.546883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"a76fa3941f969127afd41ade63807bce736ad8187b911dee25cdb85411f3f7cf"}
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.648658 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.649557 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.651807 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bv6d5"
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.668910 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.723185 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0"
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.824684 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0"
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.845375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0"
Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.969191 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 20 15:05:22 crc kubenswrapper[4949]: I0120 15:05:22.510615 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.861408 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-825w7"]
Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.863537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.893600 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-825w7"]
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102760 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.103211 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.103463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.121795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.190009 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.584349 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerStarted","Data":"b6f194539b862d0ee8b6be35de75344541fb71d8b75e2a6809fe23930f272acc"}
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.858594 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqhh2"]
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.859503 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.871976 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.872003 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.873377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dk9sh"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.874275 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2"]
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.897041 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"]
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.903718 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.919117 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"]
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015305 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015862 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015957 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.016024 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.117883 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.117956 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118107 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118174 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118651 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118700 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118910 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.119284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.119419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.120936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.131363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.132379 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.133833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.134420 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.135297 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.205090 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.228846 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.738712 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.753115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.756388 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.756683 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-psvcg"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.757224 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.767951 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.767981 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.769641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834367 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834408 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834436 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834543 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834598 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937071 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937223 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937681 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.938717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.948869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.949346 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.952061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.956198 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.958117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.959733 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.092067 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.153017 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.153071 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.232239 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"]
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.239724 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.245132 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"]
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443827 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.444639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.444940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.467573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq"
Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.562345 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.359986 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.361263 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.367991 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7lpvk" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.368017 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.368132 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.369319 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.403954 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471656 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 
15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8cg\" (UniqueName: \"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471841 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471905 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573442 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573587 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573648 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8cg\" (UniqueName: 
\"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573673 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573714 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.574696 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.574881 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") device mount path \"/mnt/openstack/pv07\"" 
pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.575793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.575827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.580501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.581639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.581775 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.594127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8cg\" (UniqueName: 
\"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.598934 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.716897 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.643680 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.644324 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dck96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-dwl6v_openstack(a79c257a-a3a3-4db1-8f46-a0a499808dbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.645463 4949 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.676994 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.702901 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.703075 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqgbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-w9f28_openstack(76c41597-7a3e-40c0-91d3-a73771874abe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.704270 4949 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" Jan 20 15:05:36 crc kubenswrapper[4949]: E0120 15:05:36.681365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.217386 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.217787 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbtn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-c7tfd_openstack(c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.219033 4949 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" podUID="c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.271016 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.275256 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhxf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bvzqr_openstack(11aac808-7998-48bc-b54a-75b207b8a12b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.279104 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" podUID="11aac808-7998-48bc-b54a-75b207b8a12b" Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.938499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.944063 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2"] Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.959315 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.055822 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.134416 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"] Jan 20 15:05:38 crc kubenswrapper[4949]: W0120 15:05:38.168452 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe70405_ca2b_4d54_9b46_c798b4ff8583.slice/crio-eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7 WatchSource:0}: Error finding container eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7: Status 404 returned error can't find the container with id eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7 Jan 20 15:05:38 crc kubenswrapper[4949]: W0120 15:05:38.177228 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbce99786_819a_47cc_8ad7_0c5581f034fa.slice/crio-0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d WatchSource:0}: Error finding container 0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d: Status 404 returned error can't find the container with id 0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d Jan 20 
15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.228926 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.235891 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342387 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") "
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342788 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") "
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342842 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"11aac808-7998-48bc-b54a-75b207b8a12b\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") "
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342909 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"11aac808-7998-48bc-b54a-75b207b8a12b\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") "
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342995 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") "
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343371 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config" (OuterVolumeSpecName: "config") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343625 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343637 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config" (OuterVolumeSpecName: "config") pod "11aac808-7998-48bc-b54a-75b207b8a12b" (UID: "11aac808-7998-48bc-b54a-75b207b8a12b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.344272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.356151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6" (OuterVolumeSpecName: "kube-api-access-hbtn6") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "kube-api-access-hbtn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.356347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4" (OuterVolumeSpecName: "kube-api-access-mhxf4") pod "11aac808-7998-48bc-b54a-75b207b8a12b" (UID: "11aac808-7998-48bc-b54a-75b207b8a12b"). InnerVolumeSpecName "kube-api-access-mhxf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445805 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") on node \"crc\" DevicePath \"\""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445841 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445850 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445859 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") on node \"crc\" DevicePath \"\""
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.660125 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.707688 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.709939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" event={"ID":"11aac808-7998-48bc-b54a-75b207b8a12b","Type":"ContainerDied","Data":"0e4baeea71ced942bdc3203d8a06e8b6e52347327c3fa43612565ec346c789be"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.710007 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.712102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2" event={"ID":"c4179fca-4378-4347-a519-96120d9ae1cc","Type":"ContainerStarted","Data":"ee99f12c2d22a1480e7d18e7e8bb90000463389249ff63a5339628c3fbdaddeb"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.715230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"e16506a9c1df886d9ba8a98349c8641329b795adc65c8bff9999d7fff4b787e3"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718079 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925" exitCode=0
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718131 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerStarted","Data":"68cb870c434ab55233ed72365d1bc78370679ae532604bfe367507e2c57caf3a"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.721148 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.725165 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerStarted","Data":"eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.734620 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.734679 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" event={"ID":"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1","Type":"ContainerDied","Data":"67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.740031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"485725f6-91f1-413b-89f5-21bde785bd94","Type":"ContainerStarted","Data":"a14d10d85d043b261760ba75f7325ee1eef372ddfaf1e2f43ad87e9041368654"}
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.751082 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.777638 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.980137437 podStartE2EDuration="19.777618199s" podCreationTimestamp="2026-01-20 15:05:19 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.445789749 +0000 UTC m=+916.255620607" lastFinishedPulling="2026-01-20 15:05:37.243270511 +0000 UTC m=+933.053101369" observedRunningTime="2026-01-20 15:05:38.773336872 +0000 UTC m=+934.583167730" watchObservedRunningTime="2026-01-20 15:05:38.777618199 +0000 UTC m=+934.587449067"
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.826223 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"]
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.827152 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"]
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.843847 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"]
Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.849207 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"]
Jan 20 15:05:39 crc kubenswrapper[4949]: W0120 15:05:39.126986 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab38c923_ec3b_400d_864a_c5e8a0d53999.slice/crio-e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc WatchSource:0}: Error finding container e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc: Status 404 returned error can't find the container with id e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.749160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7"}
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.751814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"}
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.754594 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc"}
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.756229 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerID="7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e" exitCode=0
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.756289 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e"}
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.759319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerStarted","Data":"62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe"}
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.759497 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.839599 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.949972483 podStartE2EDuration="18.839578711s" podCreationTimestamp="2026-01-20 15:05:21 +0000 UTC" firstStartedPulling="2026-01-20 15:05:25.299728678 +0000 UTC m=+921.109559526" lastFinishedPulling="2026-01-20 15:05:39.189334886 +0000 UTC m=+934.999165754" observedRunningTime="2026-01-20 15:05:39.838185547 +0000 UTC m=+935.648016405" watchObservedRunningTime="2026-01-20 15:05:39.839578711 +0000 UTC m=+935.649409569"
Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.768544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"}
Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.805706 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11aac808-7998-48bc-b54a-75b207b8a12b" path="/var/lib/kubelet/pods/11aac808-7998-48bc-b54a-75b207b8a12b/volumes"
Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.806103 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" path="/var/lib/kubelet/pods/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1/volumes"
Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.785195 4949 generic.go:334] "Generic (PLEG): container finished" podID="ee020527-9591-42dc-b000-3153caede9cf" containerID="8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268" exitCode=0
Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.785866 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerDied","Data":"8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268"}
Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.798908 4949 generic.go:334] "Generic (PLEG): container finished" podID="f03e93a7-24b6-499c-89bc-1bf3e67221a6" containerID="0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7" exitCode=0
Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.805817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerDied","Data":"0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.807617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"aa8606627e14db1a2d93aeccc484e1113035ddc4843f25b4eaa277d98ca9bdf6"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.809670 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a" exitCode=0
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.809735 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.811552 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"4842decb917138d43aa39e41244a9b936cfb8a7c419bd4b10603536aea18dd88"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.823171 4949 generic.go:334] "Generic (PLEG): container finished" podID="bce99786-819a-47cc-8ad7-0c5581f034fa" containerID="d698d42622c6944052684003b7edbe49f368043aacc69933df75aba421a7adfc" exitCode=0
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.823253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerDied","Data":"d698d42622c6944052684003b7edbe49f368043aacc69933df75aba421a7adfc"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.829317 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerID="99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81" exitCode=0
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.829404 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.832883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"7fd378f99940f01fbe8656237eac066146a9e0e0410c6b59b1fe0f0d8d2f10c9"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.835621 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"d0e706391c1e92bb8858dd4b366b220476fd009e5378badc350054a7e6da12eb"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.847787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2" event={"ID":"c4179fca-4378-4347-a519-96120d9ae1cc","Type":"ContainerStarted","Data":"813d117263ac666d2eb775e981e7bd4c19e098da4daf5c8f06c310935ae71d0f"}
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.847938 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nqhh2"
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.917823 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nqhh2" podStartSLOduration=14.239679705 podStartE2EDuration="18.917794122s" podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.005711384 +0000 UTC m=+933.815542242" lastFinishedPulling="2026-01-20 15:05:42.683825781 +0000 UTC m=+938.493656659" observedRunningTime="2026-01-20 15:05:43.904077261 +0000 UTC m=+939.713908129" watchObservedRunningTime="2026-01-20 15:05:43.917794122 +0000 UTC m=+939.727624990"
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.953341 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.485825911 podStartE2EDuration="26.953313354s" podCreationTimestamp="2026-01-20 15:05:17 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.076206487 +0000 UTC m=+915.886037345" lastFinishedPulling="2026-01-20 15:05:37.54369393 +0000 UTC m=+933.353524788" observedRunningTime="2026-01-20 15:05:43.92458536 +0000 UTC m=+939.734416218" watchObservedRunningTime="2026-01-20 15:05:43.953313354 +0000 UTC m=+939.763144212"
Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.955350 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.371276129 podStartE2EDuration="25.955338599s" podCreationTimestamp="2026-01-20 15:05:18 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.383785817 +0000 UTC m=+916.193616675" lastFinishedPulling="2026-01-20 15:05:37.967848287 +0000 UTC m=+933.777679145" observedRunningTime="2026-01-20 15:05:43.951228386 +0000 UTC m=+939.761059264" watchObservedRunningTime="2026-01-20 15:05:43.955338599 +0000 UTC m=+939.765169447"
Jan 20 15:05:44 crc kubenswrapper[4949]: I0120 15:05:44.861258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"52d510ee9c74ef34881edcdb2e4eb447b39fa992cb5ffc9b736b70708e128356"}
Jan 20 15:05:45 crc kubenswrapper[4949]: I0120 15:05:45.129957 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.887441 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"ef4a80dffe44bab4fa4b5cf30ae59a0e9bf0ef7a10071754727422ce9bca13be"}
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.889395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerStarted","Data":"a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3"}
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.890983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"8c550f8f9f911c2961719877c6aef13b282cbc03642823ffe9e74bd0bde55ee5"}
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893698 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"b09db70819b05ffad6cf612985843206060f76b4eeb837540af6df28a5ab5c8b"}
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893798 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893903 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kbnxn"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.895778 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerStarted","Data":"cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624"}
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.919311 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.501357586 podStartE2EDuration="20.919295596s" podCreationTimestamp="2026-01-20 15:05:27 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.18415031 +0000 UTC m=+933.993981168" lastFinishedPulling="2026-01-20 15:05:46.60208831 +0000 UTC m=+942.411919178" observedRunningTime="2026-01-20 15:05:47.913724147 +0000 UTC m=+943.723555005" watchObservedRunningTime="2026-01-20 15:05:47.919295596 +0000 UTC m=+943.729126454"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.932089 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.451988898 podStartE2EDuration="22.932065416s" podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.134135411 +0000 UTC m=+934.943966269" lastFinishedPulling="2026-01-20 15:05:46.614211929 +0000 UTC m=+942.424042787" observedRunningTime="2026-01-20 15:05:47.928954927 +0000 UTC m=+943.738785785" watchObservedRunningTime="2026-01-20 15:05:47.932065416 +0000 UTC m=+943.741896264"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.950609 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsqhq" podStartSLOduration=14.05141923 podStartE2EDuration="20.950584602s" podCreationTimestamp="2026-01-20 15:05:27 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.758764973 +0000 UTC m=+935.568595831" lastFinishedPulling="2026-01-20 15:05:46.657930345 +0000 UTC m=+942.467761203" observedRunningTime="2026-01-20 15:05:47.946837322 +0000 UTC m=+943.756668180" watchObservedRunningTime="2026-01-20 15:05:47.950584602 +0000 UTC m=+943.760415460"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.966959 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-825w7" podStartSLOduration=16.424811965 podStartE2EDuration="23.966937138s" podCreationTimestamp="2026-01-20 15:05:24 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.120867995 +0000 UTC m=+934.930698853" lastFinishedPulling="2026-01-20 15:05:46.662993168 +0000 UTC m=+942.472824026" observedRunningTime="2026-01-20 15:05:47.961369629 +0000 UTC m=+943.771200487" watchObservedRunningTime="2026-01-20 15:05:47.966937138 +0000 UTC m=+943.776767996"
Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.984700 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kbnxn" podStartSLOduration=18.512804911 podStartE2EDuration="22.984685289s" podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.184225412 +0000 UTC m=+933.994056270" lastFinishedPulling="2026-01-20 15:05:42.6561058 +0000 UTC m=+938.465936648" observedRunningTime="2026-01-20 15:05:47.981243928 +0000 UTC m=+943.791074786" watchObservedRunningTime="2026-01-20 15:05:47.984685289 +0000 UTC m=+943.794516147"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.093183 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.137681 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.468888 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.469265 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.539766 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.718107 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.907956 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.962904 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.013686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.170002 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"]
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.207084 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"]
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.213998 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.216020 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.226863 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"]
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234025 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234086 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.304560 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q26vt"]
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.306287 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.329882 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q26vt"]
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.330042 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338134 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338211 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338314 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338429 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.339272 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.339825 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.341082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.383221 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456614 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456658 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456685 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456755 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.457808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.458012 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt"
Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.458166 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID:
\"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.467157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.467380 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.475579 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.552444 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.612156 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.644365 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.644963 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.646082 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.649996 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.657798 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.659704 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.722829 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.752342 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.753359 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.756132 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765717 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: 
\"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766109 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766129 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766862 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config" (OuterVolumeSpecName: "config") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.767575 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.778621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96" (OuterVolumeSpecName: "kube-api-access-dck96") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "kube-api-access-dck96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.796927 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.797232 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.798122 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.812281 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.813977 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.816190 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.823227 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " 
pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868495 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868524 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868577 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868781 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: 
\"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869785 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869993 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.870009 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.870020 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") on node \"crc\" DevicePath \"\"" Jan 20 
15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.886125 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.923661 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" event={"ID":"a79c257a-a3a3-4db1-8f46-a0a499808dbf","Type":"ContainerDied","Data":"d101eef9e73f679d6e83da351b32371512e88f55aedbbee4bdb3b09d5a79f5d8"} Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.923927 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.941426 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.942603 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.951188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.965171 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.980751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981309 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " 
pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981883 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.983193 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.983782 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.997400 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: 
\"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.014752 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.023121 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.046580 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.064560 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.066142 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.069899 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.073103 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.079164 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.083929 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.083971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.085258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.114370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.116744 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.143947 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.188748 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.188807 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.199774 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.200969 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.204735 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.205019 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.205271 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7zw9t" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.208146 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.211476 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.269167 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q26vt"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.269561 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.289995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290102 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290143 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 
15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290552 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc 
kubenswrapper[4949]: I0120 15:05:50.307438 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.386894 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.391995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392330 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392405 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.393246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.394163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.395896 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.396102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.396581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.411950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.529721 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.801303 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" path="/var/lib/kubelet/pods/a79c257a-a3a3-4db1-8f46-a0a499808dbf/volumes" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.831365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.886395 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.946572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q26vt" event={"ID":"f4968375-00d3-4db1-93b4-db0808c464b2","Type":"ContainerStarted","Data":"a263ee384fa36c603a203a13cd37f1a2106615328c59785eea8e98eb32a02baf"} Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.948943 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerStarted","Data":"6996c0b6103b18456eb99c9a9d46337d5c6171dee7a722eba0c900e6409fff97"} Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:51.974738 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.190576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.191097 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.254036 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.334546 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.335438 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.347253 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.368262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.368435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.439681 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.440696 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.443483 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.450761 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470354 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470431 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470476 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470575 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: 
\"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.471339 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.488661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.572361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.572711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.573255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" 
(UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.595244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.656526 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.755668 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.047035 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.141153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.592720 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.612281 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.636442 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.642381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.657443 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.664578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.670848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.677584 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.122794 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.124713 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.126760 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.133155 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152025 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152386 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152432 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152961 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.153066 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" gracePeriod=600 Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.222848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.222983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.324116 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.324261 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.325375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.344115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.450123 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.562578 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.562659 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.613041 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.007462 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-825w7" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" containerID="cri-o://a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" gracePeriod=2 Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.053631 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.447682 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.006490 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625a0372_8b33_45fa_ad97_ad8e362be0fb.slice/crio-a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18 WatchSource:0}: Error finding container a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18: Status 404 returned error can't find the container with id a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18 Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.007678 4949 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod425d9be8_fa72_4cbe_bcc7_444e46e67a08.slice/crio-e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02 WatchSource:0}: Error finding container e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02: Status 404 returned error can't find the container with id e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.015425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q26vt" event={"ID":"f4968375-00d3-4db1-93b4-db0808c464b2","Type":"ContainerStarted","Data":"2a76f84e4e9d777e8e9e611fb0e6cc406b27b36ffe1f3a02dbd8fd19ffa65008"} Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.017351 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2993cec_87be_40ef_8f45_51ad7072f115.slice/crio-69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629 WatchSource:0}: Error finding container 69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629: Status 404 returned error can't find the container with id 69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.018360 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerStarted","Data":"a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.022370 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" exitCode=0 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.022430 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.024432 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" exitCode=0 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.025187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.025218 4949 scope.go:117] "RemoveContainer" containerID="680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.040860 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa3acdd4_7817_4358_8afb_90399e3fa23f.slice/crio-e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d WatchSource:0}: Error finding container e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d: Status 404 returned error can't find the container with id e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.043610 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.070966 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q26vt" podStartSLOduration=10.070945321 podStartE2EDuration="10.070945321s" podCreationTimestamp="2026-01-20 15:05:49 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:05:59.041452743 +0000 UTC m=+954.851283641" watchObservedRunningTime="2026-01-20 15:05:59.070945321 +0000 UTC m=+954.880776189" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.188922 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.519412 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.566806 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.566885 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.567071 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.568910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities" (OuterVolumeSpecName: "utilities") pod 
"fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.586283 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk" (OuterVolumeSpecName: "kube-api-access-p57xk") pod "fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "kube-api-access-p57xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.652648 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.653866 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680298 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680324 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680335 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.034643 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerStarted","Data":"dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.035017 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerStarted","Data":"69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.042280 4949 generic.go:334] "Generic (PLEG): container finished" podID="76c41597-7a3e-40c0-91d3-a73771874abe" containerID="0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 
15:06:00.042339 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerDied","Data":"0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044435 4949 generic.go:334] "Generic (PLEG): container finished" podID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerID="68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044563 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerStarted","Data":"f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.052535 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerStarted","Data":"164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.062779 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-31fc-account-create-update-cvjjl" podStartSLOduration=11.062757366 podStartE2EDuration="11.062757366s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.052185757 +0000 UTC m=+955.862016615" watchObservedRunningTime="2026-01-20 
15:06:00.062757366 +0000 UTC m=+955.872588234" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.068160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerStarted","Data":"094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.068214 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerStarted","Data":"cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.098417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.145874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerStarted","Data":"75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.145920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerStarted","Data":"1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.170873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerStarted","Data":"60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9"} Jan 20 15:06:00 crc 
kubenswrapper[4949]: I0120 15:06:00.170922 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerStarted","Data":"e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.173070 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerStarted","Data":"55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.173099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerStarted","Data":"1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.174764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.178424 4949 generic.go:334] "Generic (PLEG): container finished" podID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerID="cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.178487 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182597 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182597 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"68cb870c434ab55233ed72365d1bc78370679ae532604bfe367507e2c57caf3a"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182649 4949 scope.go:117] "RemoveContainer" containerID="a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.185499 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-ctk5g" podStartSLOduration=11.185485023 podStartE2EDuration="11.185485023s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.177115314 +0000 UTC m=+955.986946172" watchObservedRunningTime="2026-01-20 15:06:00.185485023 +0000 UTC m=+955.995315881" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.186548 4949 generic.go:334] "Generic (PLEG): container finished" podID="2cffaea4-923f-446d-9df7-7c35332af89d" containerID="182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.188813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerDied","Data":"182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.188843 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" 
event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerStarted","Data":"572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.189864 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsqhq" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" containerID="cri-o://cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" gracePeriod=2 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.275128 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68d2-account-create-update-7xhv6" podStartSLOduration=10.275109474 podStartE2EDuration="10.275109474s" podCreationTimestamp="2026-01-20 15:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.245631216 +0000 UTC m=+956.055462104" watchObservedRunningTime="2026-01-20 15:06:00.275109474 +0000 UTC m=+956.084940332" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.306860 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-p27hz" podStartSLOduration=3.306834933 podStartE2EDuration="3.306834933s" podCreationTimestamp="2026-01-20 15:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.266458146 +0000 UTC m=+956.076289014" watchObservedRunningTime="2026-01-20 15:06:00.306834933 +0000 UTC m=+956.116665801" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.693770 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.702636 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.797825 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" path="/var/lib/kubelet/pods/fec4e3eb-8e0c-4448-bd89-854714f2a98b/volumes" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.864357 4949 scope.go:117] "RemoveContainer" containerID="97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.865318 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936472 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936740 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936895 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.942189 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz" (OuterVolumeSpecName: "kube-api-access-bqgbz") pod 
"76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "kube-api-access-bqgbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.967638 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config" (OuterVolumeSpecName: "config") pod "76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.967791 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039025 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039048 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039060 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.195265 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" 
containerID="cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.195328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.197486 4949 generic.go:334] "Generic (PLEG): container finished" podID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerID="094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.197591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerDied","Data":"094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.199027 4949 generic.go:334] "Generic (PLEG): container finished" podID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerID="75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.199101 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerDied","Data":"75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.201068 4949 generic.go:334] "Generic (PLEG): container finished" podID="5f223041-d962-43d8-81ad-0480ed09ff57" containerID="55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.201210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" 
event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerDied","Data":"55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.203322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerStarted","Data":"e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.203422 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.209000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerStarted","Data":"498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.209680 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.212423 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2993cec-87be-40ef-8f45-51ad7072f115" containerID="dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.212474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerDied","Data":"dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.213710 4949 generic.go:334] "Generic (PLEG): container finished" podID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerID="60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9" exitCode=0 Jan 20 15:06:01 crc 
kubenswrapper[4949]: I0120 15:06:01.213752 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerDied","Data":"60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.221506 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.221821 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerDied","Data":"d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.228052 4949 generic.go:334] "Generic (PLEG): container finished" podID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerID="164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.228882 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerDied","Data":"164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.268859 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" podStartSLOduration=4.066035379 podStartE2EDuration="12.268843852s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="2026-01-20 15:05:50.859966366 +0000 UTC m=+946.669797234" lastFinishedPulling="2026-01-20 15:05:59.062774859 +0000 UTC m=+954.872605707" observedRunningTime="2026-01-20 15:06:01.267996124 +0000 UTC m=+957.077826992" watchObservedRunningTime="2026-01-20 15:06:01.268843852 +0000 UTC m=+957.078674700" Jan 20 
15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.289923 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-2vttb" podStartSLOduration=12.289904869 podStartE2EDuration="12.289904869s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:01.289727483 +0000 UTC m=+957.099558341" watchObservedRunningTime="2026-01-20 15:06:01.289904869 +0000 UTC m=+957.099735727" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.293128 4949 scope.go:117] "RemoveContainer" containerID="f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.356679 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.380550 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.433263 4949 scope.go:117] "RemoveContainer" containerID="0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.514320 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.543844 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669070 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669142 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669170 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669283 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669304 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.670473 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81d427b9-3122-480c-8b2a-3862cdd2b3e2" (UID: "81d427b9-3122-480c-8b2a-3862cdd2b3e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.675315 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c" (OuterVolumeSpecName: "kube-api-access-b6b4c") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "kube-api-access-b6b4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.688721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn" (OuterVolumeSpecName: "kube-api-access-nf5tn") pod "81d427b9-3122-480c-8b2a-3862cdd2b3e2" (UID: "81d427b9-3122-480c-8b2a-3862cdd2b3e2"). InnerVolumeSpecName "kube-api-access-nf5tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.699505 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities" (OuterVolumeSpecName: "utilities") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.701468 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.736685 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.746335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779612 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779658 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779671 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779684 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779697 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"fa3acdd4-7817-4358-8afb-90399e3fa23f\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"2cffaea4-923f-446d-9df7-7c35332af89d\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"2cffaea4-923f-446d-9df7-7c35332af89d\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.882584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"fa3acdd4-7817-4358-8afb-90399e3fa23f\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.882250 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cffaea4-923f-446d-9df7-7c35332af89d" (UID: "2cffaea4-923f-446d-9df7-7c35332af89d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.883830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa3acdd4-7817-4358-8afb-90399e3fa23f" (UID: "fa3acdd4-7817-4358-8afb-90399e3fa23f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.884574 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.884819 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.886802 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82" (OuterVolumeSpecName: "kube-api-access-fqg82") pod "fa3acdd4-7817-4358-8afb-90399e3fa23f" (UID: "fa3acdd4-7817-4358-8afb-90399e3fa23f"). InnerVolumeSpecName "kube-api-access-fqg82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.886948 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf" (OuterVolumeSpecName: "kube-api-access-f9ghf") pod "2cffaea4-923f-446d-9df7-7c35332af89d" (UID: "2cffaea4-923f-446d-9df7-7c35332af89d"). InnerVolumeSpecName "kube-api-access-f9ghf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.987144 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.987738 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243748 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerDied","Data":"e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243885 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.246392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"ce3eca8b2ae84d58cfa065da823ae3981882f17ec71ab5d111d3ef4c34b16dbd"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.246418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"463c00c27cc46935ff8763ab4169da4d41be0787d739e82660de6f6fc8fb80f2"} Jan 20 15:06:02 crc 
kubenswrapper[4949]: I0120 15:06:02.246451 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256601 4949 scope.go:117] "RemoveContainer" containerID="cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256624 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerDied","Data":"cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264235 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264384 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.277616 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.277972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerDied","Data":"572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.279066 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.280369 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=10.008662808 podStartE2EDuration="12.28035583s" podCreationTimestamp="2026-01-20 15:05:50 +0000 UTC" firstStartedPulling="2026-01-20 15:05:59.025460409 +0000 UTC m=+954.835291267" lastFinishedPulling="2026-01-20 15:06:01.297153431 +0000 UTC m=+957.106984289" observedRunningTime="2026-01-20 15:06:02.273988046 +0000 UTC m=+958.083818904" watchObservedRunningTime="2026-01-20 15:06:02.28035583 +0000 UTC m=+958.090186688" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.295730 4949 scope.go:117] "RemoveContainer" containerID="99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.301627 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.313534 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.334859 4949 scope.go:117] "RemoveContainer" containerID="7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.651934 4949 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.731578 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.763469 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.768381 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.803746 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" path="/var/lib/kubelet/pods/76c41597-7a3e-40c0-91d3-a73771874abe/volumes" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.804246 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" path="/var/lib/kubelet/pods/efe70405-ca2b-4d54-9b46-c798b4ff8583/volumes" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.809697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"5f223041-d962-43d8-81ad-0480ed09ff57\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.809850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"5f223041-d962-43d8-81ad-0480ed09ff57\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.810766 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f223041-d962-43d8-81ad-0480ed09ff57" (UID: "5f223041-d962-43d8-81ad-0480ed09ff57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.815616 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2" (OuterVolumeSpecName: "kube-api-access-rjcm2") pod "5f223041-d962-43d8-81ad-0480ed09ff57" (UID: "5f223041-d962-43d8-81ad-0480ed09ff57"). InnerVolumeSpecName "kube-api-access-rjcm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910769 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"e2993cec-87be-40ef-8f45-51ad7072f115\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"625a0372-8b33-45fa-ad97-ad8e362be0fb\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910921 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"625a0372-8b33-45fa-ad97-ad8e362be0fb\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910992 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"e2993cec-87be-40ef-8f45-51ad7072f115\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911146 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911574 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911599 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2993cec-87be-40ef-8f45-51ad7072f115" (UID: "e2993cec-87be-40ef-8f45-51ad7072f115"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911646 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "625a0372-8b33-45fa-ad97-ad8e362be0fb" (UID: "625a0372-8b33-45fa-ad97-ad8e362be0fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.912306 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d401b2e-722b-48cc-b8c4-19ffed9f43b8" (UID: "9d401b2e-722b-48cc-b8c4-19ffed9f43b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915096 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c" (OuterVolumeSpecName: "kube-api-access-pzh6c") pod "e2993cec-87be-40ef-8f45-51ad7072f115" (UID: "e2993cec-87be-40ef-8f45-51ad7072f115"). InnerVolumeSpecName "kube-api-access-pzh6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915110 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l" (OuterVolumeSpecName: "kube-api-access-mjw8l") pod "625a0372-8b33-45fa-ad97-ad8e362be0fb" (UID: "625a0372-8b33-45fa-ad97-ad8e362be0fb"). InnerVolumeSpecName "kube-api-access-mjw8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915137 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr" (OuterVolumeSpecName: "kube-api-access-z9jxr") pod "9d401b2e-722b-48cc-b8c4-19ffed9f43b8" (UID: "9d401b2e-722b-48cc-b8c4-19ffed9f43b8"). InnerVolumeSpecName "kube-api-access-z9jxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012913 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012944 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012955 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012966 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012977 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012987 4949 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.286931 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.286925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerDied","Data":"a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.287425 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290151 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerDied","Data":"1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290247 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerDied","Data":"69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291547 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291588 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296055 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerDied","Data":"1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296090 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296120 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.613463 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614388 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614407 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614429 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614449 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 
15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614457 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614466 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614473 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614484 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614492 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614507 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614534 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614546 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614552 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614561 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614568 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614585 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614591 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614602 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614609 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614619 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614627 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614641 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614649 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614664 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614671 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614684 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614691 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614888 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614905 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614920 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614927 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614941 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614950 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" 
containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614965 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614976 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614987 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614997 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.615653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.618741 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.619557 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-csksn" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.627155 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759613 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 
15:06:05.759674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759990 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861935 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861959 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.868766 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.869247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.869430 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.884423 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2kr\" (UniqueName: 
\"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.889720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.933286 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.948190 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.948481 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" containerID="cri-o://498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" gracePeriod=10 Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.325837 4949 generic.go:334] "Generic (PLEG): container finished" podID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerID="498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" exitCode=0 Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.325928 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686"} Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.533822 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584344 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584386 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584457 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.591802 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr" (OuterVolumeSpecName: "kube-api-access-2vnmr") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "kube-api-access-2vnmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.603305 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:06 crc kubenswrapper[4949]: W0120 15:06:06.613136 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc607bb7c_569c_4da2_b6bf_5b6c9b5c041e.slice/crio-1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae WatchSource:0}: Error finding container 1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae: Status 404 returned error can't find the container with id 1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.633366 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.634620 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.656211 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config" (OuterVolumeSpecName: "config") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685897 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685928 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685936 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685947 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.332797 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerStarted","Data":"1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"} Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.334972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"6996c0b6103b18456eb99c9a9d46337d5c6171dee7a722eba0c900e6409fff97"} Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.335023 4949 scope.go:117] "RemoveContainer" containerID="498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.335169 4949 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.355433 4949 scope.go:117] "RemoveContainer" containerID="cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.362358 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.369047 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.396264 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.402616 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422015 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:08 crc kubenswrapper[4949]: E0120 15:06:08.422382 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422401 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: E0120 15:06:08.422420 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="init" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422428 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="init" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422636 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.423345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.425629 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.428941 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.521116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.521262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.622752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.622828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.623761 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.652602 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.782890 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.801656 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" path="/var/lib/kubelet/pods/23edc910-bec7-4375-a48e-69abb1c9c3f2/volumes" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.802863 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" path="/var/lib/kubelet/pods/9d401b2e-722b-48cc-b8c4-19ffed9f43b8/volumes" Jan 20 15:06:09 crc kubenswrapper[4949]: W0120 15:06:09.225315 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcafb93d7_a006_4cd2_99bd_e21022a5078f.slice/crio-9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80 WatchSource:0}: Error finding container 9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80: Status 404 returned error can't find the container with id 9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80 Jan 20 15:06:09 crc kubenswrapper[4949]: I0120 15:06:09.229488 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:09 crc kubenswrapper[4949]: I0120 15:06:09.352066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerStarted","Data":"9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80"} Jan 20 15:06:10 crc kubenswrapper[4949]: I0120 15:06:10.361084 4949 generic.go:334] "Generic (PLEG): container finished" podID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerID="29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45" exitCode=0 Jan 20 15:06:10 crc kubenswrapper[4949]: I0120 15:06:10.361169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerDied","Data":"29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45"} Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.370322 4949 generic.go:334] "Generic (PLEG): container finished" podID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131" exitCode=0 Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.370352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"} Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.671255 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.776455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"cafb93d7-a006-4cd2-99bd-e21022a5078f\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.776625 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"cafb93d7-a006-4cd2-99bd-e21022a5078f\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.777120 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cafb93d7-a006-4cd2-99bd-e21022a5078f" (UID: 
"cafb93d7-a006-4cd2-99bd-e21022a5078f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.780499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr" (OuterVolumeSpecName: "kube-api-access-4lhdr") pod "cafb93d7-a006-4cd2-99bd-e21022a5078f" (UID: "cafb93d7-a006-4cd2-99bd-e21022a5078f"). InnerVolumeSpecName "kube-api-access-4lhdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.878016 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.878054 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.381993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"} Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.382321 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.384350 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerDied","Data":"9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80"} Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 
15:06:12.384379 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.384428 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.714235 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.251219672 podStartE2EDuration="57.714209128s" podCreationTimestamp="2026-01-20 15:05:15 +0000 UTC" firstStartedPulling="2026-01-20 15:05:17.694850459 +0000 UTC m=+913.504681317" lastFinishedPulling="2026-01-20 15:05:37.157839915 +0000 UTC m=+932.967670773" observedRunningTime="2026-01-20 15:06:12.412324743 +0000 UTC m=+968.222155591" watchObservedRunningTime="2026-01-20 15:06:12.714209128 +0000 UTC m=+968.524039996" Jan 20 15:06:13 crc kubenswrapper[4949]: I0120 15:06:13.397683 4949 generic.go:334] "Generic (PLEG): container finished" podID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" exitCode=0 Jan 20 15:06:13 crc kubenswrapper[4949]: I0120 15:06:13.397759 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"} Jan 20 15:06:15 crc kubenswrapper[4949]: I0120 15:06:15.586971 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.261195 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nqhh2" podUID="c4179fca-4378-4347-a519-96120d9ae1cc" containerName="ovn-controller" probeResult="failure" output=< Jan 20 15:06:16 crc 
kubenswrapper[4949]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 20 15:06:16 crc kubenswrapper[4949]: > Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.278189 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.295149 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495339 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:16 crc kubenswrapper[4949]: E0120 15:06:16.495630 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495646 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495821 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.496300 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.499371 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.511043 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.664827 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.664913 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665115 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: 
\"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665365 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665556 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod 
\"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767919 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767864 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: 
\"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.768678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.770361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.789025 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.823898 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.153962 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:19 crc kubenswrapper[4949]: W0120 15:06:19.168408 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod430c67d6_52aa_4386_8403_7be27bbe3abf.slice/crio-dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996 WatchSource:0}: Error finding container dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996: Status 404 returned error can't find the container with id dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996 Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.444810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.446169 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.447626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerStarted","Data":"dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996"} Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.470083 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.833478286 podStartE2EDuration="1m3.470066173s" podCreationTimestamp="2026-01-20 15:05:16 +0000 UTC" firstStartedPulling="2026-01-20 15:05:17.915433611 +0000 UTC m=+913.725264469" lastFinishedPulling="2026-01-20 
15:05:37.552021498 +0000 UTC m=+933.361852356" observedRunningTime="2026-01-20 15:06:19.468029528 +0000 UTC m=+975.277860396" watchObservedRunningTime="2026-01-20 15:06:19.470066173 +0000 UTC m=+975.279897031" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.584147 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.585823 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.596383 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731474 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 
15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833266 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833695 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.834136 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.834167 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 
15:06:19.864338 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.904366 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.333277 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.456021 4949 generic.go:334] "Generic (PLEG): container finished" podID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerID="a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113" exitCode=0 Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.456335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerDied","Data":"a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113"} Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.457103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerStarted","Data":"be1513768022a765d8528f5739925bf8c2f745a2c54942f090b5b47b0cc445fd"} Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.458585 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerStarted","Data":"3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4"} Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.259708 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nqhh2" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.281338 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-48l6g" podStartSLOduration=3.956552293 podStartE2EDuration="16.281315073s" podCreationTimestamp="2026-01-20 15:06:05 +0000 UTC" firstStartedPulling="2026-01-20 15:06:06.616428681 +0000 UTC m=+962.426259539" lastFinishedPulling="2026-01-20 15:06:18.941191461 +0000 UTC m=+974.751022319" observedRunningTime="2026-01-20 15:06:20.515681019 +0000 UTC m=+976.325511897" watchObservedRunningTime="2026-01-20 15:06:21.281315073 +0000 UTC m=+977.091145931" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.467345 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658" exitCode=0 Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.467396 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658"} Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.759650 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.861801 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862148 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862216 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862458 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862486 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862547 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862508 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run" (OuterVolumeSpecName: "var-run") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862573 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863611 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts" (OuterVolumeSpecName: "scripts") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863712 4949 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863743 4949 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863762 4949 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863779 4949 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.870660 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq" (OuterVolumeSpecName: "kube-api-access-dkgxq") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "kube-api-access-dkgxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.965990 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.966049 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerDied","Data":"dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996"} Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479201 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479233 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.859093 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.866117 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:23 crc kubenswrapper[4949]: I0120 15:06:23.487048 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d" exitCode=0 Jan 20 15:06:23 crc kubenswrapper[4949]: I0120 15:06:23.487096 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d"} Jan 20 15:06:24 crc kubenswrapper[4949]: I0120 15:06:24.800438 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" path="/var/lib/kubelet/pods/430c67d6-52aa-4386-8403-7be27bbe3abf/volumes" Jan 20 15:06:26 crc kubenswrapper[4949]: I0120 15:06:26.518287 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerStarted","Data":"eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"} Jan 20 15:06:26 crc kubenswrapper[4949]: I0120 15:06:26.539438 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpdcn" podStartSLOduration=3.304784367 podStartE2EDuration="7.539421856s" podCreationTimestamp="2026-01-20 15:06:19 +0000 UTC" firstStartedPulling="2026-01-20 15:06:21.469250925 +0000 UTC m=+977.279081783" lastFinishedPulling="2026-01-20 
15:06:25.703888404 +0000 UTC m=+981.513719272" observedRunningTime="2026-01-20 15:06:26.534418865 +0000 UTC m=+982.344249723" watchObservedRunningTime="2026-01-20 15:06:26.539421856 +0000 UTC m=+982.349252734" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.162299 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.534946 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:27 crc kubenswrapper[4949]: E0120 15:06:27.535246 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535258 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535415 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535902 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.545668 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.557508 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.558999 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.560415 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.584799 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.639908 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.641112 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.649623 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653301 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653399 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653473 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: 
\"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.744461 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.745606 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.748310 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.754855 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.754923 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc 
kubenswrapper[4949]: I0120 15:06:27.754969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755069 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755608 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " 
pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755837 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.757913 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.776051 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.779356 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.819921 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.821024 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.830131 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.830410 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.832914 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.838996 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.843778 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.852110 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.852467 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.853505 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857387 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.858235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" 
(UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.867501 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.878813 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.890237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.963676 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.964908 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.965681 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.969945 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971118 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971155 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971207 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.975707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.978294 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.007004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod 
\"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.063212 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087536 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" 
Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101428 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.104326 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.121272 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.123249 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.151628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.162531 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.210589 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.210741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.211908 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.238120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.334993 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.370673 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.384269 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.406857 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.471919 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.548684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerStarted","Data":"d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c"} Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.569909 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerStarted","Data":"4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"} Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.855262 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.909482 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.934153 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:28 crc kubenswrapper[4949]: W0120 15:06:28.946557 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2114c9bc_9691_4d96_8541_28ec5473428a.slice/crio-5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7 WatchSource:0}: Error finding container 5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7: Status 404 returned error can't find the container with id 5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7 Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 
15:06:28.993109 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.002642 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:29 crc kubenswrapper[4949]: W0120 15:06:29.036268 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e8050e_32dc_4014_9bc7_cd06d127eb38.slice/crio-00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1 WatchSource:0}: Error finding container 00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1: Status 404 returned error can't find the container with id 00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1 Jan 20 15:06:29 crc kubenswrapper[4949]: W0120 15:06:29.065430 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5b36c38_4cb3_43d1_ade8_a1e554264870.slice/crio-6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51 WatchSource:0}: Error finding container 6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51: Status 404 returned error can't find the container with id 6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578751 4949 generic.go:334] "Generic (PLEG): container finished" podID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerID="a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578849 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerDied","Data":"a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578895 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerStarted","Data":"3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.582034 4949 generic.go:334] "Generic (PLEG): container finished" podID="de0efbc8-5060-4336-85af-23b901dd02fe" containerID="e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.582095 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerDied","Data":"e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.583353 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerStarted","Data":"00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585622 4949 generic.go:334] "Generic (PLEG): container finished" podID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerID="1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerDied","Data":"1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585700 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" 
event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerStarted","Data":"f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586805 4949 generic.go:334] "Generic (PLEG): container finished" podID="2114c9bc-9691-4d96-8541-28ec5473428a" containerID="c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586852 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerDied","Data":"c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerStarted","Data":"5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.587997 4949 generic.go:334] "Generic (PLEG): container finished" podID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerID="16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.588045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerDied","Data":"16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589071 4949 generic.go:334] "Generic (PLEG): container finished" podID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerID="d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589105 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerDied","Data":"d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589120 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerStarted","Data":"6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.905432 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.905837 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.951763 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:30 crc kubenswrapper[4949]: I0120 15:06:30.657181 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:30 crc kubenswrapper[4949]: I0120 15:06:30.735983 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.008162 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.160215 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"2114c9bc-9691-4d96-8541-28ec5473428a\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.160428 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"2114c9bc-9691-4d96-8541-28ec5473428a\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.161124 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2114c9bc-9691-4d96-8541-28ec5473428a" (UID: "2114c9bc-9691-4d96-8541-28ec5473428a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.166071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4" (OuterVolumeSpecName: "kube-api-access-x77d4") pod "2114c9bc-9691-4d96-8541-28ec5473428a" (UID: "2114c9bc-9691-4d96-8541-28ec5473428a"). InnerVolumeSpecName "kube-api-access-x77d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.246357 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.252825 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.260962 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.262036 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.262060 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.270092 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.274563 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362705 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362755 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"6cbefde7-e737-4f29-9093-afc47f438c4c\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362785 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"de0efbc8-5060-4336-85af-23b901dd02fe\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362821 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"de0efbc8-5060-4336-85af-23b901dd02fe\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362934 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362950 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"6cbefde7-e737-4f29-9093-afc47f438c4c\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363004 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363027 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"c5b36c38-4cb3-43d1-ade8-a1e554264870\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363046 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"c5b36c38-4cb3-43d1-ade8-a1e554264870\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"de0efbc8-5060-4336-85af-23b901dd02fe" (UID: "de0efbc8-5060-4336-85af-23b901dd02fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cbefde7-e737-4f29-9093-afc47f438c4c" (UID: "6cbefde7-e737-4f29-9093-afc47f438c4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.364005 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "900c89f3-a834-4a95-88cf-b6fda3fc9c58" (UID: "900c89f3-a834-4a95-88cf-b6fda3fc9c58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.364234 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5b36c38-4cb3-43d1-ade8-a1e554264870" (UID: "c5b36c38-4cb3-43d1-ade8-a1e554264870"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.366777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7" (OuterVolumeSpecName: "kube-api-access-7zpf7") pod "6cbefde7-e737-4f29-9093-afc47f438c4c" (UID: "6cbefde7-e737-4f29-9093-afc47f438c4c"). InnerVolumeSpecName "kube-api-access-7zpf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.366815 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z" (OuterVolumeSpecName: "kube-api-access-4577z") pod "de0efbc8-5060-4336-85af-23b901dd02fe" (UID: "de0efbc8-5060-4336-85af-23b901dd02fe"). InnerVolumeSpecName "kube-api-access-4577z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.367272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b" (OuterVolumeSpecName: "kube-api-access-56t9b") pod "c5b36c38-4cb3-43d1-ade8-a1e554264870" (UID: "c5b36c38-4cb3-43d1-ade8-a1e554264870"). InnerVolumeSpecName "kube-api-access-56t9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.368014 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4" (OuterVolumeSpecName: "kube-api-access-q9zf4") pod "b57b3d7e-755f-43d2-aab3-f6d68a062a37" (UID: "b57b3d7e-755f-43d2-aab3-f6d68a062a37"). InnerVolumeSpecName "kube-api-access-q9zf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.368728 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt" (OuterVolumeSpecName: "kube-api-access-jfcwt") pod "900c89f3-a834-4a95-88cf-b6fda3fc9c58" (UID: "900c89f3-a834-4a95-88cf-b6fda3fc9c58"). InnerVolumeSpecName "kube-api-access-jfcwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465066 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465106 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465121 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465130 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465141 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465151 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465160 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") on node \"crc\" DevicePath \"\"" 
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465170 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465181 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.524932 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b57b3d7e-755f-43d2-aab3-f6d68a062a37" (UID: "b57b3d7e-755f-43d2-aab3-f6d68a062a37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.566184 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606110 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerDied","Data":"3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f"} Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606146 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606159 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerDied","Data":"d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607327 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607339 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621702 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerDied","Data":"f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621742 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621718 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerDied","Data":"5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624145 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd7sx"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624165 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626067 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerDied","Data":"4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626102 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626162 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636243 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4b86v"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerDied","Data":"6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636886 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.654436 4949 generic.go:334] "Generic (PLEG): container finished" podID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerID="3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4" exitCode=0
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.654967 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpdcn" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server" containerID="cri-o://eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0" gracePeriod=2
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.655044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerDied","Data":"3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4"}
Jan 20 15:06:33 crc kubenswrapper[4949]: I0120 15:06:33.675406 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0" exitCode=0
Jan 20 15:06:33 crc kubenswrapper[4949]: I0120 15:06:33.675473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"}
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.691204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerDied","Data":"1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"}
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.691615 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.777333 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.860316 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922416 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922497 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922544 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922667 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922744 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922809 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.924611 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities" (OuterVolumeSpecName: "utilities") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd" (OuterVolumeSpecName: "kube-api-access-bxjpd") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "kube-api-access-bxjpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927613 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927965 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr" (OuterVolumeSpecName: "kube-api-access-fb2kr") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "kube-api-access-fb2kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.944322 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.950047 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.961871 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data" (OuterVolumeSpecName: "config-data") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026596 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026630 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026645 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026656 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026667 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026675 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026686 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.698978 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerStarted","Data":"db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa"}
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.709947 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"be1513768022a765d8528f5739925bf8c2f745a2c54942f090b5b47b0cc445fd"}
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.709970 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.710018 4949 scope.go:117] "RemoveContainer" containerID="eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.710027 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.718232 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kp4rp" podStartSLOduration=3.080959181 podStartE2EDuration="8.718207273s" podCreationTimestamp="2026-01-20 15:06:27 +0000 UTC" firstStartedPulling="2026-01-20 15:06:29.038652512 +0000 UTC m=+984.848483370" lastFinishedPulling="2026-01-20 15:06:34.675900604 +0000 UTC m=+990.485731462" observedRunningTime="2026-01-20 15:06:35.715762584 +0000 UTC m=+991.525593442" watchObservedRunningTime="2026-01-20 15:06:35.718207273 +0000 UTC m=+991.528038141"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.760834 4949 scope.go:117] "RemoveContainer" containerID="df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.761510 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"]
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.767970 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"]
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.780976 4949 scope.go:117] "RemoveContainer" containerID="91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.216507 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226756 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226775 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226785 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226791 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226799 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-utilities"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226805 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-utilities"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226826 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226832 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-content"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226838 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-content"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226847 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226852 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226863 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226869 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226879 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226886 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226893 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226898 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226916 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226921 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227068 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227079 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227090 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227114 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227121 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227132 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227138 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227944 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.247759 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358081 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358119 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358170 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460134 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461280 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.501984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.583769 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.801700 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" path="/var/lib/kubelet/pods/c22e6b14-e94a-4bb0-a034-60c355928551/volumes"
Jan 20 15:06:37 crc kubenswrapper[4949]: W0120 15:06:37.050533 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod946f53c7_f2c2_4ffe_8378_32e4d2ae5d88.slice/crio-90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee WatchSource:0}: Error finding container 90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee: Status 404 returned error can't find the container with id 90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.052320 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.446793 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.730423 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee"}
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.804759 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.806503 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.815694 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.878896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.879220 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.879955 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.981672 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.983196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.004473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.122635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.409578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.759196 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d"}
Jan 20 15:06:40 crc kubenswrapper[4949]: I0120 15:06:40.776749 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"}
Jan 20 15:06:40 crc kubenswrapper[4949]: I0120 15:06:40.778695 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8"}
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.787187 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8" exitCode=0
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.787272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8"}
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.788497 4949 generic.go:334] "Generic (PLEG): container finished" podID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb" exitCode=0
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.788549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.803846 4949 generic.go:334] "Generic (PLEG): container finished" podID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerID="db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa" exitCode=0
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.803933 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerDied","Data":"db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.808409 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.809233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.855632 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" podStartSLOduration=6.855596354 podStartE2EDuration="6.855596354s" podCreationTimestamp="2026-01-20 15:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:42.850208151 +0000 UTC m=+998.660039009" watchObservedRunningTime="2026-01-20 15:06:42.855596354 +0000 UTC m=+998.665427222"
Jan 20 15:06:43 crc kubenswrapper[4949]: I0120 15:06:43.818624 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c" exitCode=0
Jan 20 15:06:43 crc kubenswrapper[4949]: I0120 15:06:43.818681 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c"}
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.147133 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp"
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197126 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197201 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.205677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg" (OuterVolumeSpecName: "kube-api-access-g85tg") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "kube-api-access-g85tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.219030 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.236365 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data" (OuterVolumeSpecName: "config-data") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300334 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300373 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300387 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.827672 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec"} Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerDied","Data":"00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1"} Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829208 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829182 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.848595 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8xd7" podStartSLOduration=5.281441215 podStartE2EDuration="7.848577796s" podCreationTimestamp="2026-01-20 15:06:37 +0000 UTC" firstStartedPulling="2026-01-20 15:06:41.788998984 +0000 UTC m=+997.598829842" lastFinishedPulling="2026-01-20 15:06:44.356135565 +0000 UTC m=+1000.165966423" observedRunningTime="2026-01-20 15:06:44.842264063 +0000 UTC m=+1000.652094941" watchObservedRunningTime="2026-01-20 15:06:44.848577796 +0000 UTC m=+1000.658408654" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.115377 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.167706 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:45 crc kubenswrapper[4949]: E0120 15:06:45.168139 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.168164 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.168381 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.169439 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.187133 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.188905 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.195471 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204231 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204740 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204785 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204737 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.210239 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224366 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224433 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224629 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224763 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224866 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.235629 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:45 
crc kubenswrapper[4949]: I0120 15:06:45.326352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326461 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326506 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326565 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326627 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326756 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326797 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.328141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.330274 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.336086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.345093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354043 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: 
\"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354896 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.355351 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.369362 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.390072 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.408291 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.441108 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.442615 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.446674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.446903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.447465 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.447670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-rlxzz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.504242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.504689 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.517569 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.569717 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.569990 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570051 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570639 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: 
\"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676527 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676631 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc 
kubenswrapper[4949]: I0120 15:06:45.677310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.677692 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.678552 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.732100 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.733632 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.738015 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741563 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741742 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741876 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6qnbk" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.743824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783576 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783634 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod 
\"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783787 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.808580 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.809642 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816218 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m8tbw" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816647 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816882 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.845885 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.859000 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns" containerID="cri-o://98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" gracePeriod=10 Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.869934 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.874973 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886031 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886161 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886177 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod 
\"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886212 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886246 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.887436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " 
pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.904791 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.909576 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.910376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.912965 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926468 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926618 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.932806 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hgk97" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.936943 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.938724 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.945170 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.945536 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.953932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.001288 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.004614 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.005870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.028178 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.087576 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.088633 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.089897 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mk2w7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091044 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091540 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091582 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091661 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: 
\"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091768 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091823 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.092067 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " 
pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.092107 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.098923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.107359 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.116605 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.134602 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.149394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.160664 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.162187 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.165796 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.167342 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.174909 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193636 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193933 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193977 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: 
\"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194071 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194088 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194118 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194145 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194187 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.203789 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.204280 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.207422 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.218579 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.219185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.221929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.222999 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: 
\"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.223329 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.225759 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.226658 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.227300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.279901 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295130 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295212 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295257 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295295 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " 
pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295343 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295403 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " 
pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295486 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: 
I0120 15:06:46.296964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.305474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.305604 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.312267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.322788 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.346081 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401624 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401729 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401758 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401836 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401889 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.403226 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.405689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.405919 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406637 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406746 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" 
(UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.408343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.412989 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.432061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.434731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.452938 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.457459 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.503965 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: W0120 15:06:46.513231 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod324ec7e2_de25_442e_851f_ffea56e932b2.slice/crio-77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b WatchSource:0}: Error finding container 77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b: Status 404 returned error can't find the container with id 77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.523982 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.669030 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:46 crc kubenswrapper[4949]: W0120 15:06:46.679314 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd34f223a_75f1_410c_8541_cbf8cc7793d0.slice/crio-9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721 WatchSource:0}: Error finding container 9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721: Status 404 returned error can't find the container with id 9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721 Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.811048 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.874266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerStarted","Data":"77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b"} Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.888069 4949 generic.go:334] "Generic (PLEG): container finished" podID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" exitCode=0 Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.888267 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894148 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"} Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894211 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee"} Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894232 4949 scope.go:117] "RemoveContainer" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.908753 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerStarted","Data":"9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721"} Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 
15:06:46.911889 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.911980 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912065 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912163 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.922440 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt" (OuterVolumeSpecName: "kube-api-access-2d8xt") pod 
"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "kube-api-access-2d8xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.943207 4949 scope.go:117] "RemoveContainer" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.977996 4949 scope.go:117] "RemoveContainer" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.979257 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:46 crc kubenswrapper[4949]: E0120 15:06:46.983679 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": container with ID starting with 98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36 not found: ID does not exist" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.983728 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"} err="failed to get container status \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": rpc error: code = NotFound desc = could not find container \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": container with ID starting with 
98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36 not found: ID does not exist" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.983758 4949 scope.go:117] "RemoveContainer" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb" Jan 20 15:06:46 crc kubenswrapper[4949]: E0120 15:06:46.986942 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": container with ID starting with 34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb not found: ID does not exist" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.986966 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"} err="failed to get container status \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": rpc error: code = NotFound desc = could not find container \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": container with ID starting with 34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb not found: ID does not exist" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.000710 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.003138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.004312 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config" (OuterVolumeSpecName: "config") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016450 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016486 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016496 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016509 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:47 crc 
kubenswrapper[4949]: I0120 15:06:47.016534 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.113472 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:06:47 crc kubenswrapper[4949]: W0120 15:06:47.126832 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1f491ae_d6c7_4cc9_90f2_f76910f86c81.slice/crio-a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762 WatchSource:0}: Error finding container a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762: Status 404 returned error can't find the container with id a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762 Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.137722 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.233471 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.239823 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.434886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.467652 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.479855 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.485735 4949 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.493301 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.559947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.918285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69655bc997-jlksz" event={"ID":"f1f491ae-d6c7-4cc9-90f2-f76910f86c81","Type":"ContainerStarted","Data":"a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.920026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerStarted","Data":"cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.920105 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init" containerID="cri-o://cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea" gracePeriod=10 Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.922018 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerStarted","Data":"433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.923421 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"d010c875444d4ba584246f10f1a99b66845b15ef9ef3b2384373a0f15b7f64f0"} Jan 20 15:06:47 crc 
kubenswrapper[4949]: I0120 15:06:47.925352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerStarted","Data":"a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.932405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerStarted","Data":"f00ec8d28626a4cd0a80c63c891ae1ccadb47e0b90177f99a5486b458b879328"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.935435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerStarted","Data":"1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.950960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerStarted","Data":"a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.953184 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"d9198a98b9f1f6021caa331f5093846a0dd1690786dc4510142a57f8e1848ff4"} Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.958225 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerStarted","Data":"34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7"} Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.003110 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 
15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.068136 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wv77h" podStartSLOduration=3.068116332 podStartE2EDuration="3.068116332s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:48.056445126 +0000 UTC m=+1003.866275984" watchObservedRunningTime="2026-01-20 15:06:48.068116332 +0000 UTC m=+1003.877947180" Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081082 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"] Jan 20 15:06:48 crc kubenswrapper[4949]: E0120 15:06:48.081413 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns" Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081425 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns" Jan 20 15:06:48 crc kubenswrapper[4949]: E0120 15:06:48.081447 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="init" Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081453 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="init" Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081678 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns" Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.082545 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.113835 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.123904 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.123950 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.191163 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241595 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241805 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343606 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343968 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344822 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.345320 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.353615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.405399 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.703386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.822332 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" path="/var/lib/kubelet/pods/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88/volumes"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.971428 4949 generic.go:334] "Generic (PLEG): container finished" podID="324ec7e2-de25-442e-851f-ffea56e932b2" containerID="cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea" exitCode=0
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.974531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerDied","Data":"cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"}
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.211697 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s8xd7" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" probeResult="failure" output=<
Jan 20 15:06:49 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s
Jan 20 15:06:49 crc kubenswrapper[4949]: >
Jan 20 15:06:49 crc kubenswrapper[4949]: W0120 15:06:49.241298 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a77932_734e_416b_a182_5e84f6749d95.slice/crio-b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f WatchSource:0}: Error finding container b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f: Status 404 returned error can't find the container with id b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.246723 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.988953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f"}
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.626361 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782750 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782853 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.783035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.788488 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2" (OuterVolumeSpecName: "kube-api-access-xwnl2") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "kube-api-access-xwnl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.811787 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.832062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.832253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config" (OuterVolumeSpecName: "config") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.848175 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885479 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885509 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885532 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885541 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885550 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.006064 4949 generic.go:334] "Generic (PLEG): container finished" podID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" exitCode=0
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.006142 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.017709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerStarted","Data":"f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027611 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerDied","Data":"77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027669 4949 scope.go:117] "RemoveContainer" containerID="cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027796 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.066941 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lbd6l" podStartSLOduration=6.066886259 podStartE2EDuration="6.066886259s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:51.049693197 +0000 UTC m=+1006.859524055" watchObservedRunningTime="2026-01-20 15:06:51.066886259 +0000 UTC m=+1006.876717117"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.111672 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.117926 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.067305 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerStarted","Data":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"}
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.067856 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.093806 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" podStartSLOduration=7.093792283 podStartE2EDuration="7.093792283s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:52.086678135 +0000 UTC m=+1007.896509013" watchObservedRunningTime="2026-01-20 15:06:52.093792283 +0000 UTC m=+1007.903623131"
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.802676 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" path="/var/lib/kubelet/pods/324ec7e2-de25-442e-851f-ffea56e932b2/volumes"
Jan 20 15:06:53 crc kubenswrapper[4949]: I0120 15:06:53.083753 4949 generic.go:334] "Generic (PLEG): container finished" podID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerID="1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa" exitCode=0
Jan 20 15:06:53 crc kubenswrapper[4949]: I0120 15:06:53.084363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerDied","Data":"1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa"}
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.468153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.487134 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:06:54 crc kubenswrapper[4949]: E0120 15:06:54.489545 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.489571 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.489761 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.499983 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.503811 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.506753 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566599 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566646 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566727 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566756 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566782 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.588945 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.619131 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.621573 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.634496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672720 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672754 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672847 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.677186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678547 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678861 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.682707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.694365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775908 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775961 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.830662 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877500 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.879001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.881469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.881954 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.882945 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.883063 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.884427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.910825 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.948022 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.430364 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486031 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486210 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486375 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492000 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492781 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts" (OuterVolumeSpecName: "scripts") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.494845 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj" (OuterVolumeSpecName: "kube-api-access-jt9zj") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "kube-api-access-jt9zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.515665 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data" (OuterVolumeSpecName: "config-data") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.520104 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588386 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588422 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588440 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588462 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588477 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588491 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.116129 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerDied","Data":"9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721"} Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 
15:06:56.116410 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.116229 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.513669 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.563427 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.565266 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" containerID="cri-o://e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746" gracePeriod=10 Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.614573 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.621058 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.701013 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:06:56 crc kubenswrapper[4949]: E0120 15:06:56.701415 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.701432 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 
15:06:56.701627 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.702230 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.706894 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707183 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707290 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707320 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707582 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.714879 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.802423 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" path="/var/lib/kubelet/pods/d34f223a-75f1-410c-8541-cbf8cc7793d0/volumes" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.808838 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 
15:06:56.808986 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809062 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809107 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809291 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 
15:06:56.911115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911199 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911247 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911446 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ngx\" 
(UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.918637 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.923613 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.924003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.942978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.944599 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: 
\"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.952577 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.033375 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.124617 4949 generic.go:334] "Generic (PLEG): container finished" podID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerID="e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746" exitCode=0 Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.124671 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746"} Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.177807 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.228423 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.415933 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"] Jan 20 15:07:00 crc kubenswrapper[4949]: I0120 15:07:00.147969 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8xd7" 
podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" containerID="cri-o://5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" gracePeriod=2 Jan 20 15:07:00 crc kubenswrapper[4949]: I0120 15:07:00.887851 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 20 15:07:01 crc kubenswrapper[4949]: I0120 15:07:01.159276 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" exitCode=0 Jan 20 15:07:01 crc kubenswrapper[4949]: I0120 15:07:01.159322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec"} Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.785294 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.785792 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjkqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-j9pm7_openstack(1f96f008-7e3c-4512-bddd-51e42a0c7ce2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.786981 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-j9pm7" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.806655 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.806813 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh76h586hd6hbdh574h5dh65fh548h65ch7ch554h5cdhc4h57ch5dchfch8ch568hbh5c5h65ch545hf6h55fh99h5cchc4h5fch559hb5h589q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fprdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-69655bc997-jlksz_openstack(f1f491ae-d6c7-4cc9-90f2-f76910f86c81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 
15:07:03.811316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-69655bc997-jlksz" podUID="f1f491ae-d6c7-4cc9-90f2-f76910f86c81" Jan 20 15:07:04 crc kubenswrapper[4949]: E0120 15:07:04.182009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-j9pm7" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:05.887157 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.123207 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.123656 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" 
containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.124066 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.124098 4949 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-s8xd7" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:10.887119 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:10.887564 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.186157 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.186752 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lm4wz_openstack(f476712d-366a-4948-b282-66660a6d81c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.187939 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lm4wz" podUID="f476712d-366a-4948-b282-66660a6d81c4" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.263360 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.284106 4949 generic.go:334] "Generic (PLEG): container finished" podID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerID="f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7" exitCode=0 Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.284168 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerDied","Data":"f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7"} Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.289574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69655bc997-jlksz" event={"ID":"f1f491ae-d6c7-4cc9-90f2-f76910f86c81","Type":"ContainerDied","Data":"a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762"} Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.289683 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.291342 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lm4wz" podUID="f476712d-366a-4948-b282-66660a6d81c4" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316016 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316086 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316128 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316229 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.317089 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts" (OuterVolumeSpecName: "scripts") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.318830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data" (OuterVolumeSpecName: "config-data") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.324400 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.324641 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs" (OuterVolumeSpecName: "logs") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.329763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk" (OuterVolumeSpecName: "kube-api-access-fprdk") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "kube-api-access-fprdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419317 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419378 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419388 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419398 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419406 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.716490 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] 
Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.725490 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.301028 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d"} Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.301070 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.303938 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d"} Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.304038 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.327332 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.327840 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4htd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2fwjt_openstack(c18369cb-0b5b-40f7-bc73-af04fb510f31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.329912 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2fwjt" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.496694 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.526900 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544602 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544678 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544745 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544787 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.568475 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p" (OuterVolumeSpecName: "kube-api-access-t876p") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "kube-api-access-t876p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.648588 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650142 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650299 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650809 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.657013 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities" (OuterVolumeSpecName: "utilities") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: 
"2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.666164 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj" (OuterVolumeSpecName: "kube-api-access-8vszj") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: "2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "kube-api-access-8vszj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.701807 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.711209 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.711689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.716946 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config" (OuterVolumeSpecName: "config") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753183 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753709 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753726 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753736 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: 
I0120 15:07:14.753745 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753753 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.758427 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r" (OuterVolumeSpecName: "kube-api-access-rlj6r") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "kube-api-access-rlj6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.764889 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.778617 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config" (OuterVolumeSpecName: "config") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.784669 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.794445 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: "2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.798789 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f491ae-d6c7-4cc9-90f2-f76910f86c81" path="/var/lib/kubelet/pods/f1f491ae-d6c7-4cc9-90f2-f76910f86c81/volumes" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.848318 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.850806 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706427a3_6d1f_4a5e_9b50_d84499daec46.slice/crio-f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b WatchSource:0}: Error finding container f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b: Status 404 returned error can't find the container with id f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854858 4949 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854883 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854895 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854907 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854916 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.945934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.952592 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"] Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.953618 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b5f79a_1adc_4ec3_a257_ce37600d2357.slice/crio-a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36 WatchSource:0}: Error finding container a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36: Status 404 returned error can't find 
the container with id a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36 Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.955170 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08182d24_cea6_4daa_9dbb_efcb48b76434.slice/crio-7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7 WatchSource:0}: Error finding container 7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7: Status 404 returned error can't find the container with id 7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7 Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.957927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.315244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"9b7fcb23cf1b22d54783dddccc4d6105dc9312897b69812b01661d40eb317c5e"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.315608 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerDied","Data":"34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317342 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317303 4949 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325189 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325318 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbd48cfd5-mt6hk" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" containerID="cri-o://893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325414 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbd48cfd5-mt6hk" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" containerID="cri-o://1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" 
event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345271 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354138 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354217 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68ccd6ddcc-h9gfp" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" containerID="cri-o://de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354448 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68ccd6ddcc-h9gfp" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" containerID="cri-o://6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354776 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cbd48cfd5-mt6hk" podStartSLOduration=3.631959758 
podStartE2EDuration="30.354766021s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.516938402 +0000 UTC m=+1003.326769260" lastFinishedPulling="2026-01-20 15:07:14.239744645 +0000 UTC m=+1030.049575523" observedRunningTime="2026-01-20 15:07:15.345751042 +0000 UTC m=+1031.155581920" watchObservedRunningTime="2026-01-20 15:07:15.354766021 +0000 UTC m=+1031.164596879" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.356929 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerStarted","Data":"32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.356980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerStarted","Data":"a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.358690 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.358960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.359032 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.360284 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2fwjt" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.403568 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68ccd6ddcc-h9gfp" podStartSLOduration=2.327409965 podStartE2EDuration="27.403550119s" podCreationTimestamp="2026-01-20 15:06:48 +0000 UTC" firstStartedPulling="2026-01-20 15:06:49.243571011 +0000 UTC m=+1005.053401859" lastFinishedPulling="2026-01-20 15:07:14.319711145 +0000 UTC m=+1030.129542013" observedRunningTime="2026-01-20 15:07:15.382532563 +0000 UTC m=+1031.192363411" watchObservedRunningTime="2026-01-20 15:07:15.403550119 +0000 UTC m=+1031.213380967" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.430072 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.466953 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vx8lk" podStartSLOduration=19.466936767 podStartE2EDuration="19.466936767s" podCreationTimestamp="2026-01-20 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:15.434275027 +0000 UTC m=+1031.244105875" watchObservedRunningTime="2026-01-20 15:07:15.466936767 +0000 UTC m=+1031.276767625" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.470771 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"] 
Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.499303 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.505040 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.524282 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530907 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-utilities" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530951 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-utilities" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530970 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530977 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530988 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-content" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530995 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-content" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531005 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="init" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531010 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="init" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531024 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531030 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531048 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531054 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531280 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531297 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531308 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.532208 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.598085 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683348 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683483 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683543 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785662 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785699 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwzm\" 
(UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786906 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.787424 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.809721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod 
\"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.833039 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.838732 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.843646 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.843913 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.844042 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m8tbw" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.844185 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.860910 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.902004 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988502 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988606 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: 
\"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091284 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091314 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091336 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091357 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 
crc kubenswrapper[4949]: I0120 15:07:16.099510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.100894 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.102334 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.108900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.119230 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.199184 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.379544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"50a40c3b443c9d1f88865f231374a380f38e032a993dc472376f4b5afa9af43b"} Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.416018 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66d45cfc44-ltr94" podStartSLOduration=22.415999899 podStartE2EDuration="22.415999899s" podCreationTimestamp="2026-01-20 15:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:16.408982153 +0000 UTC m=+1032.218813031" watchObservedRunningTime="2026-01-20 15:07:16.415999899 +0000 UTC m=+1032.225830757" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.438806 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68cb9b7c44-mz9j4" podStartSLOduration=22.438789421 podStartE2EDuration="22.438789421s" podCreationTimestamp="2026-01-20 15:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:16.435228887 +0000 UTC m=+1032.245059745" watchObservedRunningTime="2026-01-20 15:07:16.438789421 +0000 UTC m=+1032.248620279" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.471974 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.525094 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.812251 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" path="/var/lib/kubelet/pods/2bda8fe4-4e94-40d2-83fb-916ac550b698/volumes" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.813103 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" path="/var/lib/kubelet/pods/e64d5fa0-6c79-43df-9331-f9024cc3c9f4/volumes" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.813632 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.386548 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"ddacbe5809c0f3426708e64d9337ca1ab93d7f38d1a8f505676198c5a7a916e0"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388160 4949 generic.go:334] "Generic (PLEG): container finished" podID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerID="8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21" exitCode=0 Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388397 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388492 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerStarted","Data":"7264a419821a7cd4155fa26254f761dbcc032333908b45daed1c6c1c517da1c9"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.861755 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.863482 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.865899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.866159 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.877029 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927712 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927782 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927912 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928127 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn82\" (UniqueName: \"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.029900 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.029967 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hn82\" (UniqueName: 
\"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030204 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod 
\"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037201 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.045194 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 
15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.048581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hn82\" (UniqueName: \"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.049343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.180493 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.403545 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerStarted","Data":"65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.404983 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.408662 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.410210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" 
event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.410238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.411109 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.425583 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" podStartSLOduration=3.425568374 podStartE2EDuration="3.425568374s" podCreationTimestamp="2026-01-20 15:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:18.421892117 +0000 UTC m=+1034.231722995" watchObservedRunningTime="2026-01-20 15:07:18.425568374 +0000 UTC m=+1034.235399232" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.450878 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56bb6988d6-9n8x4" podStartSLOduration=3.450851397 podStartE2EDuration="3.450851397s" podCreationTimestamp="2026-01-20 15:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:18.443906904 +0000 UTC m=+1034.253737762" watchObservedRunningTime="2026-01-20 15:07:18.450851397 +0000 UTC m=+1034.260682285" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.704309 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68ccd6ddcc-h9gfp" Jan 20 15:07:19 crc kubenswrapper[4949]: I0120 
15:07:19.265252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:19 crc kubenswrapper[4949]: I0120 15:07:19.432151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"7dec160cb1d986d5f09d779a008dfbb52758466dc46ab88b396d87cf74881d6b"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443359 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"6a5ebdc1710d4ea48ad99f93aebfa15d6a020551dd135ac00dfb5980f16b0210"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443934 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"6c809ddca02d8d3c380965b052e8c8bbdebd3de052894826ecfa7932a84693c9"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443959 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.445303 4949 generic.go:334] "Generic (PLEG): container finished" podID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerID="32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2" exitCode=0 Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.445365 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerDied","Data":"32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.447879 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" 
event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerStarted","Data":"57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.472464 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8cd78967-6cmpj" podStartSLOduration=3.472445549 podStartE2EDuration="3.472445549s" podCreationTimestamp="2026-01-20 15:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:20.466983364 +0000 UTC m=+1036.276814222" watchObservedRunningTime="2026-01-20 15:07:20.472445549 +0000 UTC m=+1036.282276407" Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.506361 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j9pm7" podStartSLOduration=3.581924131 podStartE2EDuration="35.506343589s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.585162936 +0000 UTC m=+1003.394993794" lastFinishedPulling="2026-01-20 15:07:19.509582394 +0000 UTC m=+1035.319413252" observedRunningTime="2026-01-20 15:07:20.504213611 +0000 UTC m=+1036.314044469" watchObservedRunningTime="2026-01-20 15:07:20.506343589 +0000 UTC m=+1036.316174447" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.478479 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerID="57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5" exitCode=0 Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.478553 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerDied","Data":"57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5"} Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.833931 4949 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.834798 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.949356 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.949662 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.904703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.956414 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.956637 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" containerID="cri-o://ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" gracePeriod=10 Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.978342 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.982148 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112661 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112771 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112812 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112828 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112916 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112935 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112972 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113038 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 
15:07:26.114138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs" (OuterVolumeSpecName: "logs") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.119799 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts" (OuterVolumeSpecName: "scripts") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.123068 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.123215 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw" (OuterVolumeSpecName: "kube-api-access-qjkqw") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "kube-api-access-qjkqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.125647 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx" (OuterVolumeSpecName: "kube-api-access-q2ngx") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "kube-api-access-q2ngx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.126381 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.127743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts" (OuterVolumeSpecName: "scripts") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.147660 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.151708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data" (OuterVolumeSpecName: "config-data") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.153570 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data" (OuterVolumeSpecName: "config-data") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.155540 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214640 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214847 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214942 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215014 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215086 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215150 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215247 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215321 4949 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215385 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215446 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215591 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.384838 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.495935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497528 4949 generic.go:334] "Generic (PLEG): container finished" podID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" exitCode=0 Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497640 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"f00ec8d28626a4cd0a80c63c891ae1ccadb47e0b90177f99a5486b458b879328"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497665 4949 scope.go:117] "RemoveContainer" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497577 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerDied","Data":"a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502830 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502862 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504584 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerDied","Data":"a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504605 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504647 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.518828 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.518991 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519075 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519168 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519238 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.542725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2" (OuterVolumeSpecName: "kube-api-access-55xc2") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "kube-api-access-55xc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.550598 4949 scope.go:117] "RemoveContainer" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.572650 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.577102 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592009 4949 scope.go:117] "RemoveContainer" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.592439 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": container with ID starting with ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8 not found: ID does not exist" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592473 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"} err="failed to get container status \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": rpc error: code = NotFound desc = could not find container \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": container with ID starting with ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8 not found: ID does not exist" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592532 4949 scope.go:117] "RemoveContainer" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.592794 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": container with ID starting with 55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f not found: ID does not exist" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592836 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f"} err="failed to get container status \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": rpc error: code = NotFound desc = could not find container \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": container with ID starting with 55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f not found: ID does not exist" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.604031 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.608318 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config" (OuterVolumeSpecName: "config") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609481 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"] Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609857 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="init" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609872 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="init" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609889 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609916 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609922 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610065 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 
15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610082 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610122 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.611296 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615274 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mk2w7"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615422 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615494 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615556 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622403 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622427 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622438 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622446 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622455 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.626533 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"]
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724417 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724487 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724542 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724654 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724699 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724732 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826143 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826236 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.827247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.832824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.832910 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.833268 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.833558 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.836361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.855094 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.942566 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"]
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.950903 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"]
Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.969859 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.099370 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"]
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.107472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.139303 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.144326 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155153 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155571 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155709 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.166176 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.195708 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"]
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254654 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254689 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357579 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357627 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357664 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357712 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357819 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.364655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.365081 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.365942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.366288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.366949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.370985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.373001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.377875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.462682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.567373 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"]
Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.945069 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"]
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b69c674cf-wdfrq" event={"ID":"7dd53c2b-505a-4783-9e2a-34857e6158ea","Type":"ContainerStarted","Data":"bf8105bb971e0d08141087f8b97079725c4f104f877781f4eea677ba659357f5"}
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b69c674cf-wdfrq"
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b69c674cf-wdfrq" event={"ID":"7dd53c2b-505a-4783-9e2a-34857e6158ea","Type":"ContainerStarted","Data":"b1eb891e17f431d0cbea0e94f6adb27f17d31a16563a13bef867a35db25b798e"}
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"d1ce75aa2076f268a07fda8dbf1ce6fe400bd6dcd83ab2f12e603da24dc24461"}
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"b40530f5d488f07d63141d181091d91a616a2221f4da44361e1a9618a85b4f37"}
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549893 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"58f3f8ca9a53b066df31967f56d7dcc93c42e798c7f5cc0df8e18866cbdc93f9"}
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.550050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.550089 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754d6d4c8d-v7txj"
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.575290 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b69c674cf-wdfrq" podStartSLOduration=1.575268697 podStartE2EDuration="1.575268697s" podCreationTimestamp="2026-01-20 15:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:28.565141374 +0000 UTC m=+1044.374972232" watchObservedRunningTime="2026-01-20 15:07:28.575268697 +0000 UTC m=+1044.385099585"
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.591363 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-754d6d4c8d-v7txj" podStartSLOduration=2.5913449 podStartE2EDuration="2.5913449s" podCreationTimestamp="2026-01-20 15:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:28.582238049 +0000 UTC m=+1044.392068897" watchObservedRunningTime="2026-01-20 15:07:28.5913449 +0000 UTC m=+1044.401175758"
Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.817907 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" path="/var/lib/kubelet/pods/de517a3d-702a-4488-9a61-c1037cbdd5a2/volumes"
Jan 20 15:07:29 crc kubenswrapper[4949]: I0120 15:07:29.559343 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerStarted","Data":"8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7"}
Jan 20 15:07:29 crc kubenswrapper[4949]: I0120 15:07:29.582846 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lm4wz" podStartSLOduration=3.58346006 podStartE2EDuration="44.58282605s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.526759677 +0000 UTC m=+1003.336590525" lastFinishedPulling="2026-01-20 15:07:28.526125657 +0000 UTC m=+1044.335956515" observedRunningTime="2026-01-20 15:07:29.577159779 +0000 UTC m=+1045.386990647" watchObservedRunningTime="2026-01-20 15:07:29.58282605 +0000 UTC m=+1045.392656908"
Jan 20 15:07:31 crc kubenswrapper[4949]: I0120 15:07:31.581183 4949 generic.go:334] "Generic (PLEG): container finished" podID="f476712d-366a-4948-b282-66660a6d81c4" containerID="8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7" exitCode=0
Jan 20 15:07:31 crc kubenswrapper[4949]: I0120 15:07:31.581269 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerDied","Data":"8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7"}
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.532428 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lm4wz"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593822 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") "
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593950 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") "
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593988 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") "
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.613363 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.614240 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh" (OuterVolumeSpecName: "kube-api-access-xn6jh") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "kube-api-access-xn6jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.614940 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerDied","Data":"a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c"}
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.615006 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.615095 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lm4wz"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.637441 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696448 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696725 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696738 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910137 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"]
Jan 20 15:07:33 crc kubenswrapper[4949]: E0120 15:07:33.910468 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910484 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910672 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.911468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.913751 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.943895 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"]
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.945214 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.947639 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.993469 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021396 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021456 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021554 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.086741 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.105657 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.107431 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.116025 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125099 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125233 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125264 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125399 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc 
kubenswrapper[4949]: I0120 15:07:34.125450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125474 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125505 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " 
pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126336 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.132086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.132580 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.159924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: 
\"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.161569 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.162868 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.168356 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.177985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.188087 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.227960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " 
pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228054 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228089 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228173 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " 
pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228241 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228266 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " 
pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228339 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228401 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228803 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.229610 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 
15:07:34.231667 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.232315 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.233386 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.233745 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.237508 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.237650 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.238138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.247672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.251076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " 
pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.268865 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.329983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330118 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.334834 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.335475 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.335576 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.354960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod 
\"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.505317 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.517808 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.834801 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.890262 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"] Jan 20 15:07:34 crc kubenswrapper[4949]: W0120 15:07:34.905642 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b718a3_85a6_4bb6_9e17_9ff6936cb5c4.slice/crio-8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3 WatchSource:0}: Error finding container 8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3: Status 404 returned error can't find the container with id 8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3 Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.952982 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66d45cfc44-ltr94" podUID="08182d24-cea6-4daa-9dbb-efcb48b76434" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 
20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.020138 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"] Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.174947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.305295 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:35 crc kubenswrapper[4949]: W0120 15:07:35.314368 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c4987c_6ff4_4108_b5f9_6609525cf7ce.slice/crio-0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0 WatchSource:0}: Error finding container 0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0: Status 404 returned error can't find the container with id 0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0 Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.656175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerStarted","Data":"4302ab47c368b5674e528e1d7aae1b710a2dfee3ac0f8609de5205f31154a236"} Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.657319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3"} Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.658418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0"} Jan 20 
15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.659833 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"649b128f3159abcbb35e72e4487070d44b730a8e1ea3e984efefd13ce20104c8"} Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.662908 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5"} Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663100 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent" containerID="cri-o://470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668" gracePeriod=30 Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663153 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd" containerID="cri-o://757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5" gracePeriod=30 Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663121 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663167 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core" containerID="cri-o://f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d" gracePeriod=30 Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663233 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent" containerID="cri-o://83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a" gracePeriod=30
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.702501 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.793561908 podStartE2EDuration="50.702477846s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.542896097 +0000 UTC m=+1003.352726955" lastFinishedPulling="2026-01-20 15:07:34.451812035 +0000 UTC m=+1050.261642893" observedRunningTime="2026-01-20 15:07:35.68788263 +0000 UTC m=+1051.497713508" watchObservedRunningTime="2026-01-20 15:07:35.702477846 +0000 UTC m=+1051.512308694"
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672440 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5" exitCode=0
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672787 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d" exitCode=2
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672796 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668" exitCode=0
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672619 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5"}
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d"}
Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668"}
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.064087 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"]
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.066671 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.069764 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.071959 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.077939 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"]
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094558 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094770 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094798 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.195934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.195996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196033 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196174 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.197118 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.200856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.201730 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.201978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.203223 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.205045 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.212891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.388118 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.683833 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerStarted","Data":"aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad"}
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"}
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692503 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"}
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692584 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.694927 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" exitCode=0
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.694961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3"}
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.710070 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2fwjt" podStartSLOduration=5.417277489 podStartE2EDuration="52.710045392s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.141367477 +0000 UTC m=+1002.951198335" lastFinishedPulling="2026-01-20 15:07:34.43413538 +0000 UTC m=+1050.243966238" observedRunningTime="2026-01-20 15:07:37.702868333 +0000 UTC m=+1053.512699191" watchObservedRunningTime="2026-01-20 15:07:37.710045392 +0000 UTC m=+1053.519876250"
Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.732743 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podStartSLOduration=3.732717197 podStartE2EDuration="3.732717197s" podCreationTimestamp="2026-01-20 15:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:37.724741842 +0000 UTC m=+1053.534572710" watchObservedRunningTime="2026-01-20 15:07:37.732717197 +0000 UTC m=+1053.542548055"
Jan 20 15:07:38 crc kubenswrapper[4949]: I0120 15:07:38.354848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"]
Jan 20 15:07:38 crc kubenswrapper[4949]: W0120 15:07:38.576665 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25689957_1a77_40ab_8a4c_1e40a1524bac.slice/crio-2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c WatchSource:0}: Error finding container 2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c: Status 404 returned error can't find the container with id 2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c
Jan 20 15:07:38 crc kubenswrapper[4949]: I0120 15:07:38.704653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.714377 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"052edb9be82749cb28eff78e8b63c5a2cac63350ba500c0c50dd0f85b7c1da40"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.714968 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"6f489479e016971ffe2a3d2ede4b55aa5ce8f7db149e5b078813074560bced9c"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"9e61e01af472bf05d6d7c40fadd70ed45403f8ec58aa8dc6982f947334ed6e8e"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715948 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"a7821e5a97cb61b41b5f3b0d62d2b6b6ef692607878a720c03e99fb5afb39090"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.716031 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ffcb5df54-fhbnh"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.717229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerStarted","Data":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.717338 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.719279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"477a2ea5f293cd656ceee7c1a14b62bd3ba78bc275f987d9d445f6e4baae801b"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.719320 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"5019b346386e48fc9c1b559559e0adc8b49ae61a7841975c4ba132f1883eae38"}
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.734130 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84d486fc9-sgwzr" podStartSLOduration=2.860268559 podStartE2EDuration="6.734106055s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="2026-01-20 15:07:35.067409895 +0000 UTC m=+1050.877240753" lastFinishedPulling="2026-01-20 15:07:38.941247371 +0000 UTC m=+1054.751078249" observedRunningTime="2026-01-20 15:07:39.730720397 +0000 UTC m=+1055.540551265" watchObservedRunningTime="2026-01-20 15:07:39.734106055 +0000 UTC m=+1055.543936913"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.750416 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" podStartSLOduration=2.713776597 podStartE2EDuration="6.750395985s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="2026-01-20 15:07:34.914465458 +0000 UTC m=+1050.724296316" lastFinishedPulling="2026-01-20 15:07:38.951084846 +0000 UTC m=+1054.760915704" observedRunningTime="2026-01-20 15:07:39.747297556 +0000 UTC m=+1055.557128444" watchObservedRunningTime="2026-01-20 15:07:39.750395985 +0000 UTC m=+1055.560226843"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.769395 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" podStartSLOduration=6.769376382 podStartE2EDuration="6.769376382s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:39.765118876 +0000 UTC m=+1055.574949734" watchObservedRunningTime="2026-01-20 15:07:39.769376382 +0000 UTC m=+1055.579207240"
Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.805751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-ffcb5df54-fhbnh" podStartSLOduration=2.805707412 podStartE2EDuration="2.805707412s" podCreationTimestamp="2026-01-20 15:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:39.796357044 +0000 UTC m=+1055.606187902" watchObservedRunningTime="2026-01-20 15:07:39.805707412 +0000 UTC m=+1055.615538270"
Jan 20 15:07:40 crc kubenswrapper[4949]: I0120 15:07:40.742645 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a" exitCode=0
Jan 20 15:07:40 crc kubenswrapper[4949]: I0120 15:07:40.742905 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a"}
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.180665 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.321788 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.321851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322084 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322227 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") "
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322454 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322728 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.327415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg" (OuterVolumeSpecName: "kube-api-access-q2fmg") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "kube-api-access-q2fmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.327624 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts" (OuterVolumeSpecName: "scripts") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.348582 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.398482 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425287 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425321 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425333 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425345 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425353 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.436465 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data" (OuterVolumeSpecName: "config-data") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.526848 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753733 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"d9198a98b9f1f6021caa331f5093846a0dd1690786dc4510142a57f8e1848ff4"}
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753787 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753813 4949 scope.go:117] "RemoveContainer" containerID="757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.792355 4949 scope.go:117] "RemoveContainer" containerID="f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.798382 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.811060 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.819289 4949 scope.go:117] "RemoveContainer" containerID="83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847230 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847578 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847593 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core"
Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847607 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847613 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd"
Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847624 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847631 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847657 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847662 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847815 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847824 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847837 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847844 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.848941 4949 scope.go:117] "RemoveContainer" containerID="470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.854155 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.858643 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.858899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.871348 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039387 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039884 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039926 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040008 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040033 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040162 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142417 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142470 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142507 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142706 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142758 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.143284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.145154 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.149402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.150393 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.150671 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0"
Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.151955 4949 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.166890 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.183754 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: W0120 15:07:42.617105 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5dc0c3_1563_4605_81e6_2ed8a343353b.slice/crio-a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74 WatchSource:0}: Error finding container a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74: Status 404 returned error can't find the container with id a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74 Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.617995 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.762452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74"} Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.765821 4949 generic.go:334] "Generic (PLEG): container finished" podID="c18369cb-0b5b-40f7-bc73-af04fb510f31" 
containerID="aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad" exitCode=0 Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.765915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerDied","Data":"aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad"} Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.801734 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" path="/var/lib/kubelet/pods/d4755b36-8e78-4503-aa84-efb904d6e6d9/volumes" Jan 20 15:07:43 crc kubenswrapper[4949]: I0120 15:07:43.780922 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.186067 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.280838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281152 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281191 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281226 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281730 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.286600 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd" (OuterVolumeSpecName: "kube-api-access-v4htd") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "kube-api-access-v4htd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.286617 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts" (OuterVolumeSpecName: "scripts") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.289561 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.313993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.345696 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data" (OuterVolumeSpecName: "config-data") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383067 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383119 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383131 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383142 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc 
kubenswrapper[4949]: I0120 15:07:44.383153 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383164 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.507611 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.591889 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.592125 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" containerID="cri-o://65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a" gracePeriod=10 Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerDied","Data":"433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828535 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828597 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.869781 4949 generic.go:334] "Generic (PLEG): container finished" podID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerID="65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a" exitCode=0 Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.869826 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.105751 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:07:45 crc kubenswrapper[4949]: E0120 15:07:45.106790 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.106828 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.107062 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.108199 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.134772 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.137134 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6qnbk" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139563 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139941 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.155036 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.185009 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.199774 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207923 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311546 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: 
\"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311735 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311780 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.312839 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313506 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.321383 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.323363 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.332891 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.338205 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.372727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415789 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415882 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415911 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415984 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416129 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416179 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416200 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416268 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.419818 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.429841 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.432463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.433132 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.444128 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.449466 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.455537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.494963 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521643 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521659 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.525355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.525905 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.528868 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.529012 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.530571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.543063 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.548866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.671354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.684762 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829301 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") "
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829454 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") "
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829604 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") "
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") "
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") "
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.835129 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm" (OuterVolumeSpecName: "kube-api-access-gxwzm") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "kube-api-access-gxwzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.876865 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"7264a419821a7cd4155fa26254f761dbcc032333908b45daed1c6c1c517da1c9"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884734 4949 scope.go:117] "RemoveContainer" containerID="65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884854 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx"
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.891205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903135 4949 generic.go:334] "Generic (PLEG): container finished" podID="f1a77932-734e-416b-a182-5e84f6749d95" containerID="6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6" exitCode=137
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903168 4949 generic.go:334] "Generic (PLEG): container finished" podID="f1a77932-734e-416b-a182-5e84f6749d95" containerID="de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427" exitCode=137
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.905286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.921721 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925506 4949 generic.go:334] "Generic (PLEG): container finished" podID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerID="1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b" exitCode=137
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925635 4949 generic.go:334] "Generic (PLEG): container finished" podID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerID="893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec" exitCode=137
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925580 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec"}
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.940937 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943088 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943299 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943380 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.961272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config" (OuterVolumeSpecName: "config") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.986662 4949 scope.go:117] "RemoveContainer" containerID="8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.045528 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.131125 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.216820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56bb6988d6-9n8x4"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.237220 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.237643 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"]
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264395 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264612 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264657 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.274466 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.277819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs" (OuterVolumeSpecName: "logs") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.281544 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"]
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.281743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5" (OuterVolumeSpecName: "kube-api-access-rxxd5") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "kube-api-access-rxxd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.309913 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data" (OuterVolumeSpecName: "config-data") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.357011 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts" (OuterVolumeSpecName: "scripts") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378124 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378308 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") "
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378709 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378726 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378735 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378742 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378750 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.379125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs" (OuterVolumeSpecName: "logs") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.388768 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5" (OuterVolumeSpecName: "kube-api-access-sjtj5") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "kube-api-access-sjtj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.403780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.419608 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data" (OuterVolumeSpecName: "config-data") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.433215 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts" (OuterVolumeSpecName: "scripts") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.479961 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.479998 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480008 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480017 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480028 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.575971 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"]
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.616501 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.643813 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.818696 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" path="/var/lib/kubelet/pods/b0cd5b2d-6321-4992-be2e-5926f77e0790/volumes"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.965743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"d010c875444d4ba584246f10f1a99b66845b15ef9ef3b2384373a0f15b7f64f0"}
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.966041 4949 scope.go:117] "RemoveContainer" containerID="1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.966149 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk"
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.983313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"531ef9b2d0a73d666ccd1d688de60c4cf9fa28d82e193bd4da5c061157252815"}
Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.990490 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"a21906d0080002b80059b53f2425557bdbad30ceb5fad9442aae6757d6585801"}
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:46.998581 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"]
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.014733 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"]
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.027583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerStarted","Data":"052fba35240bac70130e0cfdaa3376b77a051a12b8b97d00e1f00afe30ca5b57"}
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.053321 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f"}
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.053431 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.103318 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.112397 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.495753 4949 scope.go:117] "RemoveContainer" containerID="893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.532116 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.544670 4949 scope.go:117] "RemoveContainer" containerID="6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.756787 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.841387 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.974464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.975724 4949 scope.go:117] "RemoveContainer" containerID="de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.031948 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.085199 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"}
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.086597 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.105903 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"}
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.132972 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.997727787 podStartE2EDuration="7.132954856s" podCreationTimestamp="2026-01-20 15:07:41 +0000 UTC" firstStartedPulling="2026-01-20 15:07:42.619948494 +0000 UTC m=+1058.429779362" lastFinishedPulling="2026-01-20 15:07:46.755175573 +0000 UTC m=+1062.565006431" observedRunningTime="2026-01-20 15:07:48.120548509 +0000 UTC m=+1063.930379367" watchObservedRunningTime="2026-01-20 15:07:48.132954856 +0000 UTC m=+1063.942785714"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134324 4949 generic.go:334] "Generic (PLEG): container finished" podID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" exitCode=0
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134389 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b"}
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerStarted","Data":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"}
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.135329 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.163419 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" podStartSLOduration=3.163405179 podStartE2EDuration="3.163405179s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:48.160443494 +0000 UTC m=+1063.970274352" watchObservedRunningTime="2026-01-20 15:07:48.163405179 +0000 UTC m=+1063.973236037"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.212828 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b8cd78967-6cmpj"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.273403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"]
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.274146 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bb6988d6-9n8x4" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" containerID="cri-o://bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" gracePeriod=30
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.274744 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bb6988d6-9n8x4" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" containerID="cri-o://5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" gracePeriod=30
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.802744 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" path="/var/lib/kubelet/pods/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964/volumes"
Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.803376 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a77932-734e-416b-a182-5e84f6749d95" path="/var/lib/kubelet/pods/f1a77932-734e-416b-a182-5e84f6749d95/volumes"
Jan 20 15:07:49 crc kubenswrapper[4949]: I0120 15:07:49.162203 4949 generic.go:334] "Generic (PLEG): container finished" podID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerID="5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" exitCode=0
Jan 20 15:07:49 crc kubenswrapper[4949]: I0120 15:07:49.162285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391"}
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.094314 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"}
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174593 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" containerID="cri-o://f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" gracePeriod=30
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174881 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.175268 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" containerID="cri-o://6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" gracePeriod=30
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.199610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"}
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.199683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"}
Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.231670 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.231647343 podStartE2EDuration="5.231647343s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2026-01-20 15:07:50.194731744 +0000 UTC m=+1066.004562622" watchObservedRunningTime="2026-01-20 15:07:50.231647343 +0000 UTC m=+1066.041478201" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.237287 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6611253809999997 podStartE2EDuration="5.237264052s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="2026-01-20 15:07:46.618535797 +0000 UTC m=+1062.428366655" lastFinishedPulling="2026-01-20 15:07:48.194674468 +0000 UTC m=+1064.004505326" observedRunningTime="2026-01-20 15:07:50.227018315 +0000 UTC m=+1066.036849173" watchObservedRunningTime="2026-01-20 15:07:50.237264052 +0000 UTC m=+1066.047094910" Jan 20 15:07:50 crc kubenswrapper[4949]: E0120 15:07:50.320462 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a6ab06_8876_4497_9f92_ad1d32c55d9c.slice/crio-conmon-f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.496326 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.529156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.606364 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.607664 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log" 
containerID="cri-o://89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.608085 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" containerID="cri-o://03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.809298 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.016332 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210033 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" exitCode=0 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210077 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" exitCode=143 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210113 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210188 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210226 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"531ef9b2d0a73d666ccd1d688de60c4cf9fa28d82e193bd4da5c061157252815"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210260 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210385 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211038 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs" (OuterVolumeSpecName: "logs") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211571 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211663 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211709 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.213184 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.213387 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.227675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7" (OuterVolumeSpecName: "kube-api-access-zdvb7") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "kube-api-access-zdvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.227978 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.229503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts" (OuterVolumeSpecName: "scripts") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.312633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316232 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316259 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316270 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316282 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") on node \"crc\" DevicePath \"\"" Jan 
20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.335624 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data" (OuterVolumeSpecName: "config-data") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.362202 4949 scope.go:117] "RemoveContainer" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381202 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.381706 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381765 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} err="failed to get container status \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381788 4949 scope.go:117] "RemoveContainer" 
containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.382087 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382152 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} err="failed to get container status \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382177 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382406 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} err="failed to get container status \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382423 4949 scope.go:117] 
"RemoveContainer" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382639 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} err="failed to get container status \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.417746 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.516658 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.591411 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.600595 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.614615 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.614826 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" containerID="cri-o://5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" gracePeriod=30 Jan 20 15:07:51 crc kubenswrapper[4949]: 
I0120 15:07:51.615166 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" containerID="cri-o://4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" gracePeriod=30 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.632893 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633093 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633438 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633448 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633477 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633483 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633495 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633501 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 
15:07:51.633509 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633529 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633539 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633544 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633556 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="init" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633561 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="init" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633571 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633576 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633584 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633590 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633734 4949 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633761 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633777 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633787 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633804 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633822 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633835 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.634709 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.639478 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.640036 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.640603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.660052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723573 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723718 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723798 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723856 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723946 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723973 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723993 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.724023 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.724116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" 
(UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825719 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825809 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825886 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825937 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825973 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826779 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: 
I0120 15:07:51.830587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.831026 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.831565 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.838106 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.838199 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.845901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: 
\"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.847014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.972580 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.228231 4949 generic.go:334] "Generic (PLEG): container finished" podID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerID="bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" exitCode=0 Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.228292 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48"} Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.231863 4949 generic.go:334] "Generic (PLEG): container finished" podID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" exitCode=143 Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.231906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"} Jan 20 15:07:52 crc kubenswrapper[4949]: W0120 15:07:52.444395 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605e8425_f80d_4cd4_981d_afb431ec676f.slice/crio-f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da WatchSource:0}: Error finding container f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da: Status 404 returned error can't find the container with id f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.446886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.449147 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638684 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.645386 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.665083 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz" (OuterVolumeSpecName: "kube-api-access-5b4dz") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "kube-api-access-5b4dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.690232 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.692731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config" (OuterVolumeSpecName: "config") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.712428 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.739995 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740269 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740285 4949 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740295 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc 
kubenswrapper[4949]: I0120 15:07:52.740305 4949 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.805199 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" path="/var/lib/kubelet/pods/e2a6ab06-8876-4497-9f92-ad1d32c55d9c/volumes" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.247727 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"dbf190b8fc9200feb4f932ca6e3e11c3b22be8af4ed6ed1fb97690486206b2b0"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.247782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"ddacbe5809c0f3426708e64d9337ca1ab93d7f38d1a8f505676198c5a7a916e0"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250410 4949 scope.go:117] "RemoveContainer" containerID="5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250453 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.280532 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.290990 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.310251 4949 scope.go:117] "RemoveContainer" containerID="bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.267554 4949 generic.go:334] "Generic (PLEG): container finished" podID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerID="03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74" exitCode=0 Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.267603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74"} Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.270009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"018e8f77b260f7ae0f2759d6b0154a2c7badd2338405087345879685208f320c"} Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.270117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.291928 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.291906887 podStartE2EDuration="3.291906887s" podCreationTimestamp="2026-01-20 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 15:07:54.286410431 +0000 UTC m=+1070.096241289" watchObservedRunningTime="2026-01-20 15:07:54.291906887 +0000 UTC m=+1070.101737745" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.765050 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:36756->10.217.0.152:9311: read: connection reset by peer" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.765095 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:36772->10.217.0.152:9311: read: connection reset by peer" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.807371 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" path="/var/lib/kubelet/pods/98759ef1-a1b3-414c-8131-cbdb90833a60/volumes" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.831707 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.227795 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280802 4949 generic.go:334] "Generic (PLEG): container finished" podID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" exitCode=0 Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280842 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280877 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"} Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0"} Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280949 4949 scope.go:117] "RemoveContainer" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284489 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284672 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: 
\"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284763 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284814 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284837 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.285471 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs" (OuterVolumeSpecName: "logs") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.290689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.290757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz" (OuterVolumeSpecName: "kube-api-access-rbdvz") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "kube-api-access-rbdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.314266 4949 scope.go:117] "RemoveContainer" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.315197 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.355711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data" (OuterVolumeSpecName: "config-data") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.380627 4949 scope.go:117] "RemoveContainer" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: E0120 15:07:55.380965 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": container with ID starting with 4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2 not found: ID does not exist" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.380999 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"} err="failed to get container status \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": rpc error: code = NotFound desc = could not find container \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": container with ID starting with 4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2 not found: ID does not exist" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.381023 4949 scope.go:117] "RemoveContainer" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: E0120 15:07:55.381349 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": container with ID starting with 5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a not found: ID does not exist" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.381385 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"} err="failed to get container status \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": rpc error: code = NotFound desc = could not find container \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": container with ID starting with 5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a not found: ID does not exist" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387747 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387796 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387813 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387826 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387838 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.458840 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.516098 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.517018 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" containerID="cri-o://28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" gracePeriod=10 Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.614924 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.624315 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.757259 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.803076 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.090637 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.200914 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.200998 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201103 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.213851 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm" (OuterVolumeSpecName: "kube-api-access-zqqrm") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "kube-api-access-zqqrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.253092 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.258717 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config" (OuterVolumeSpecName: "config") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.262036 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.265935 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.290582 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" exitCode=0 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.290777 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" containerID="cri-o://ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" gracePeriod=30 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291715 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"} Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291774 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"4302ab47c368b5674e528e1d7aae1b710a2dfee3ac0f8609de5205f31154a236"} Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291814 4949 scope.go:117] "RemoveContainer" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.292797 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" containerID="cri-o://17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" 
gracePeriod=30 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303281 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303453 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303534 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303677 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303752 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.326348 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.334667 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.344733 4949 scope.go:117] "RemoveContainer" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.376462 4949 scope.go:117] "RemoveContainer" 
containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: E0120 15:07:56.377015 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": container with ID starting with 28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4 not found: ID does not exist" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.377048 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"} err="failed to get container status \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": rpc error: code = NotFound desc = could not find container \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": container with ID starting with 28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4 not found: ID does not exist" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.377072 4949 scope.go:117] "RemoveContainer" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: E0120 15:07:56.380672 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": container with ID starting with 75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3 not found: ID does not exist" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.380730 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3"} err="failed to get container status \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": rpc error: code = NotFound desc = could not find container \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": container with ID starting with 75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3 not found: ID does not exist" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.798864 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" path="/var/lib/kubelet/pods/57c4987c-6ff4-4108-b5f9-6609525cf7ce/volumes" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.799653 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" path="/var/lib/kubelet/pods/f5bea5c0-8837-4f65-8bd5-40d0d8201410/volumes" Jan 20 15:07:57 crc kubenswrapper[4949]: I0120 15:07:57.302718 4949 generic.go:334] "Generic (PLEG): container finished" podID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" exitCode=0 Jan 20 15:07:57 crc kubenswrapper[4949]: I0120 15:07:57.302869 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"} Jan 20 15:07:58 crc kubenswrapper[4949]: I0120 15:07:58.212890 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:58 crc kubenswrapper[4949]: I0120 15:07:58.236947 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:59 crc kubenswrapper[4949]: I0120 15:07:59.160134 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.094048 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208216 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208324 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208424 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208571 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208600 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.209141 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.214849 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz" (OuterVolumeSpecName: "kube-api-access-79gnz") pod 
"0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "kube-api-access-79gnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.232668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts" (OuterVolumeSpecName: "scripts") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.234792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244196 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244710 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244727 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244742 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244750 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 
crc kubenswrapper[4949]: E0120 15:08:01.244763 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244772 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244782 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244789 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244806 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="init" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244814 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="init" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244829 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244837 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244861 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244870 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244885 4949 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244893 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245082 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245106 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245116 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245133 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245141 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245153 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245165 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245875 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249051 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249470 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d2p9q" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249645 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.286010 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.296576 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312702 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc 
kubenswrapper[4949]: I0120 15:08:01.312744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312826 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312837 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312847 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312855 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.339949 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" exitCode=0 Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.339997 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"} Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"a21906d0080002b80059b53f2425557bdbad30ceb5fad9442aae6757d6585801"} Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340051 4949 scope.go:117] "RemoveContainer" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340187 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.347255 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data" (OuterVolumeSpecName: "config-data") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.361659 4949 scope.go:117] "RemoveContainer" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.377874 4949 scope.go:117] "RemoveContainer" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.378212 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": container with ID starting with 17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603 not found: ID does not exist" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378243 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"} err="failed to get container status \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": rpc error: code = NotFound desc = could not find container \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": container with ID starting with 17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603 not found: ID does not exist" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378265 4949 scope.go:117] "RemoveContainer" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.378611 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": container with ID starting with 
ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461 not found: ID does not exist" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378636 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"} err="failed to get container status \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": rpc error: code = NotFound desc = could not find container \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": container with ID starting with ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461 not found: ID does not exist" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.413990 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414064 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414264 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414321 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.415213 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.419150 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.419598 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.433950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 
15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.639355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.699234 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.711548 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.730595 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.732306 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.736605 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.761353 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821562 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821672 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmcn\" (UniqueName: \"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmcn\" (UniqueName: \"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923841 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923910 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924015 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924122 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.929332 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.932569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.933034 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.935320 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.939324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmcn\" (UniqueName: \"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.085441 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.132955 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.371984 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4f97ab-7425-4271-bd09-0e89073ebdc1","Type":"ContainerStarted","Data":"c7a72ff9f80b65b7d772678f9f48eba0e7b3c0592c04657cd483d698352d1657"} Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.675413 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.799430 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" path="/var/lib/kubelet/pods/0350a5c4-7eb7-42bb-a72e-28b120f08f7a/volumes" Jan 20 15:08:03 crc kubenswrapper[4949]: I0120 15:08:03.383220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"fee8ebe3b90a305ec63a24873f113d7350787b11e89a550377d36d1586d948fa"} Jan 20 15:08:03 crc kubenswrapper[4949]: I0120 15:08:03.383734 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"861cf8d0f131b25fa20cc6f6dfeaa6a3808c767ca3d020fcd2dcb3d5149934a0"} Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.074184 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.402956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"a3cccdfcec5171fe6a3030f71ccf8473a8b09577cf68f0985139d036ee099f7b"} 
Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.814332 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.814314139 podStartE2EDuration="3.814314139s" podCreationTimestamp="2026-01-20 15:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:04.42250658 +0000 UTC m=+1080.232337468" watchObservedRunningTime="2026-01-20 15:08:04.814314139 +0000 UTC m=+1080.624145017" Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.831661 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 20 15:08:07 crc kubenswrapper[4949]: I0120 15:08:07.085937 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 15:08:11 crc kubenswrapper[4949]: I0120 15:08:11.471149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4f97ab-7425-4271-bd09-0e89073ebdc1","Type":"ContainerStarted","Data":"50529ebaac9b1823a7ed7a1416655dbdb97246909831f4e3f913f412d2233b2e"} Jan 20 15:08:11 crc kubenswrapper[4949]: I0120 15:08:11.490357 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7498311960000001 podStartE2EDuration="10.490339313s" podCreationTimestamp="2026-01-20 15:08:01 +0000 UTC" firstStartedPulling="2026-01-20 15:08:02.154737621 +0000 UTC m=+1077.964568479" lastFinishedPulling="2026-01-20 15:08:10.895245738 +0000 UTC m=+1086.705076596" observedRunningTime="2026-01-20 15:08:11.485722305 +0000 UTC m=+1087.295553173" watchObservedRunningTime="2026-01-20 
15:08:11.490339313 +0000 UTC m=+1087.300170171" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.194035 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.312286 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.692662 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693088 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core" containerID="cri-o://38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693218 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd" containerID="cri-o://8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693291 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent" containerID="cri-o://55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693007 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent" containerID="cri-o://bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" gracePeriod=30 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.470061 
4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498827 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498871 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" exitCode=2 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498880 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498888 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499134 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499158 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499187 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499346 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.530798 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.566868 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.586047 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.611509 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612049 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612080 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612105 4949 scope.go:117] "RemoveContainer" 
containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612494 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612515 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612552 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612921 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612952 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612971 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.613270 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613289 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613303 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613582 4949 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613627 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614030 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614047 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614416 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not 
found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614456 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614792 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614845 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615193 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615212 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615460 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get 
container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615477 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615801 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615823 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616205 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616228 4949 scope.go:117] "RemoveContainer" 
containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616572 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616597 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616919 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616963 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617256 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could 
not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617306 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617700 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.640949 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641039 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641077 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod 
\"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641106 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641150 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.642179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.642259 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.647485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts" (OuterVolumeSpecName: "scripts") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.647573 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd" (OuterVolumeSpecName: "kube-api-access-smchd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "kube-api-access-smchd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.672725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.739633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data" (OuterVolumeSpecName: "config-data") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743429 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743459 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743472 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743481 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743488 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743496 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.763277 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.845221 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.863313 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.870775 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892126 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892581 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892600 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892621 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892629 4949 state_mem.go:107] "Deleted CPUSet
assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd"
Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892648 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892658 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892669 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892677 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892875 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892898 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892911 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892925 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.894849 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.897164 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.898341 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.912832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052594 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052868 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID:
\"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053173 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053281 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053363 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.154963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155062 4949 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155150 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.156459 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.159978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.160094 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.161845 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.177739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.184039 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.248300 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.696682 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.802136 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" path="/var/lib/kubelet/pods/fe5dc0c3-1563-4605-81e6-2ed8a343353b/volumes"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.831354 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.831458 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:08:15 crc kubenswrapper[4949]: I0120 15:08:15.541862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"001934a2249d5b368738c4a7af5d9dcb8380201f7480fa3d74bbff0f9ef72bdd"}
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.843369 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8sgnq"]
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.845336 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.866961 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8sgnq"]
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.969559 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"]
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.970840 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.973895 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.991834 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xh75b"]
Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.992828 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.001037 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.009503 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.009607 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.038446 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.098227 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.099435 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.111505 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112560 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112600 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113050 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113149 4949
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.115840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.144328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.162455 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.163771 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.165660 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.192116 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214768 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID:
\"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.215922 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.217814 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.229370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod
\"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.239138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.257628 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc
kubenswrapper[4949]: I0120 15:08:17.316785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.317502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.326781 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.347037 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.365275 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.367759 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.376002 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.377059 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.380672 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.408958 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"]
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420138 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420921 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.421045 4949
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.421633 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.439484 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.508635 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.522748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.522877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.524250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.557953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.578335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686"} Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.697926 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.996223 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.119896 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.276535 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.318266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.345989 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.406646 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.588961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerStarted","Data":"938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.589009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerStarted","Data":"a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd"} Jan 
20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.592796 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerStarted","Data":"3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.595164 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerStarted","Data":"a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.595215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerStarted","Data":"90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.600958 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerStarted","Data":"44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.603421 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerStarted","Data":"ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.603466 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerStarted","Data":"a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.606128 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-xh75b" podStartSLOduration=2.606110376 podStartE2EDuration="2.606110376s" podCreationTimestamp="2026-01-20 15:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.60434799 +0000 UTC m=+1094.414178868" watchObservedRunningTime="2026-01-20 15:08:18.606110376 +0000 UTC m=+1094.415941234" Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.616722 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.622877 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-8sgnq" podStartSLOduration=2.622856241 podStartE2EDuration="2.622856241s" podCreationTimestamp="2026-01-20 15:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.619212785 +0000 UTC m=+1094.429043643" watchObservedRunningTime="2026-01-20 15:08:18.622856241 +0000 UTC m=+1094.432687099" Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.625002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerStarted","Data":"b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.641263 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9780-account-create-update-7t5m4" podStartSLOduration=2.64124564 podStartE2EDuration="2.64124564s" podCreationTimestamp="2026-01-20 15:08:16 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.635057672 +0000 UTC m=+1094.444888530" watchObservedRunningTime="2026-01-20 15:08:18.64124564 +0000 UTC m=+1094.451076498" Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.637539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.639128 4949 generic.go:334] "Generic (PLEG): container finished" podID="956eb935-630a-49f6-8b3e-e5053edea66b" containerID="1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.639186 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerDied","Data":"1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.641011 4949 generic.go:334] "Generic (PLEG): container finished" podID="6572b1b9-85e4-4ede-879f-754c173433d1" containerID="938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.641044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerDied","Data":"938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.642601 4949 generic.go:334] "Generic (PLEG): container finished" podID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerID="135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011" exitCode=0 Jan 20 15:08:19 
crc kubenswrapper[4949]: I0120 15:08:19.642651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerDied","Data":"135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.644302 4949 generic.go:334] "Generic (PLEG): container finished" podID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerID="a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.644358 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerDied","Data":"a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.646146 4949 generic.go:334] "Generic (PLEG): container finished" podID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerID="24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.646230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerDied","Data":"24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.648236 4949 generic.go:334] "Generic (PLEG): container finished" podID="170f8463-ece8-42b9-944f-b4adcc22e897" containerID="ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.648280 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" 
event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerDied","Data":"ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d"} Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.622839 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-conmon-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-conmon-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.623226 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.623251 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-conmon-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-conmon-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 
15:08:20.623290 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: I0120 15:08:20.658742 4949 generic.go:334] "Generic (PLEG): container finished" podID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerID="89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc" exitCode=137 Jan 20 15:08:20 crc kubenswrapper[4949]: I0120 15:08:20.658969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc"} Jan 20 15:08:20 crc kubenswrapper[4949]: E0120 15:08:20.836776 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706427a3_6d1f_4a5e_9b50_d84499daec46.slice/crio-89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.066363 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105082 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105127 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105191 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105233 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105306 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.123248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs" (OuterVolumeSpecName: "logs") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.123553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm" (OuterVolumeSpecName: "kube-api-access-58hhm") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "kube-api-access-58hhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.169774 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217112 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217135 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217144 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.250047 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts" (OuterVolumeSpecName: "scripts") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.277317 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.305615 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data" (OuterVolumeSpecName: "config-data") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336009 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336033 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336044 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.363151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.393710 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.404047 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.418934 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.429553 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.441306 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.443549 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.466638 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542835 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"956eb935-630a-49f6-8b3e-e5053edea66b\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542888 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542935 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542968 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"956eb935-630a-49f6-8b3e-e5053edea66b\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"6572b1b9-85e4-4ede-879f-754c173433d1\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543058 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"6572b1b9-85e4-4ede-879f-754c173433d1\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543090 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543137 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543720 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "956eb935-630a-49f6-8b3e-e5053edea66b" (UID: "956eb935-630a-49f6-8b3e-e5053edea66b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.544144 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6572b1b9-85e4-4ede-879f-754c173433d1" (UID: "6572b1b9-85e4-4ede-879f-754c173433d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.545062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" (UID: "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.546761 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc" (OuterVolumeSpecName: "kube-api-access-qp9nc") pod "956eb935-630a-49f6-8b3e-e5053edea66b" (UID: "956eb935-630a-49f6-8b3e-e5053edea66b"). InnerVolumeSpecName "kube-api-access-qp9nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547200 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz" (OuterVolumeSpecName: "kube-api-access-hcxcz") pod "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" (UID: "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c"). InnerVolumeSpecName "kube-api-access-hcxcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547306 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91c4f23f-5c92-4f03-a457-6fe5ddc27eec" (UID: "91c4f23f-5c92-4f03-a457-6fe5ddc27eec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547494 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl" (OuterVolumeSpecName: "kube-api-access-vbckl") pod "6572b1b9-85e4-4ede-879f-754c173433d1" (UID: "6572b1b9-85e4-4ede-879f-754c173433d1"). InnerVolumeSpecName "kube-api-access-vbckl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.548771 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz" (OuterVolumeSpecName: "kube-api-access-zpksz") pod "91c4f23f-5c92-4f03-a457-6fe5ddc27eec" (UID: "91c4f23f-5c92-4f03-a457-6fe5ddc27eec"). InnerVolumeSpecName "kube-api-access-zpksz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"170f8463-ece8-42b9-944f-b4adcc22e897\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645295 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod 
\"170f8463-ece8-42b9-944f-b4adcc22e897\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") "
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") "
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645876 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170f8463-ece8-42b9-944f-b4adcc22e897" (UID: "170f8463-ece8-42b9-944f-b4adcc22e897"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645902 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3187f0f3-7689-4faf-92cc-8d869ef8ecd9" (UID: "3187f0f3-7689-4faf-92cc-8d869ef8ecd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646293 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646324 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646342 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646559 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646573 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646585 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646600 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646612 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646623 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646635 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.648903 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8" (OuterVolumeSpecName: "kube-api-access-ww9g8") pod "3187f0f3-7689-4faf-92cc-8d869ef8ecd9" (UID: "3187f0f3-7689-4faf-92cc-8d869ef8ecd9"). InnerVolumeSpecName "kube-api-access-ww9g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.658041 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj" (OuterVolumeSpecName: "kube-api-access-5g4dj") pod "170f8463-ece8-42b9-944f-b4adcc22e897" (UID: "170f8463-ece8-42b9-944f-b4adcc22e897"). InnerVolumeSpecName "kube-api-access-5g4dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671161 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerDied","Data":"44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671510 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.672913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerDied","Data":"a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.672991 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.673085 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8sgnq"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693170 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent" containerID="cri-o://ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686" gracePeriod=30
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693282 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd" containerID="cri-o://74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7" gracePeriod=30
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693335 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core" containerID="cri-o://094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c" gracePeriod=30
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693371 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent" containerID="cri-o://96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c" gracePeriod=30
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693715 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698013 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerDied","Data":"b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698057 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698135 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701849 4949 scope.go:117] "RemoveContainer" containerID="03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701986 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerDied","Data":"a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711085 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711152 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.718072 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.774025257 podStartE2EDuration="8.71805498s" podCreationTimestamp="2026-01-20 15:08:13 +0000 UTC" firstStartedPulling="2026-01-20 15:08:14.727119765 +0000 UTC m=+1090.536950623" lastFinishedPulling="2026-01-20 15:08:20.671149488 +0000 UTC m=+1096.480980346" observedRunningTime="2026-01-20 15:08:21.716714687 +0000 UTC m=+1097.526545555" watchObservedRunningTime="2026-01-20 15:08:21.71805498 +0000 UTC m=+1097.527885838"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerDied","Data":"3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721175 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721258 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerDied","Data":"90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491"}
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725040 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725112 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4"
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.748153 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.748184 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.834126 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.844725 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.939708 4949 scope.go:117] "RemoveContainer" containerID="89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc"
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733463 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7" exitCode=0
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733798 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c" exitCode=2
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7"}
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733841 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c"}
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c"}
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733810 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c" exitCode=0
Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.797233 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" path="/var/lib/kubelet/pods/706427a3-6d1f-4a5e-9b50-d84499daec46/volumes"
Jan 20 15:08:24 crc kubenswrapper[4949]: I0120 15:08:24.756696 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686" exitCode=0
Jan 20 15:08:24 crc kubenswrapper[4949]: I0120 15:08:24.756776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686"}
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.122383 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298797 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298910 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298958 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.299010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.299185 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300196 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300225 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") "
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300820 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.301008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.308127 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts" (OuterVolumeSpecName: "scripts") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.321756 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5" (OuterVolumeSpecName: "kube-api-access-hfmm5") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "kube-api-access-hfmm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.336700 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403227 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403295 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403314 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403334 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403352 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.459817 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.472376 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data" (OuterVolumeSpecName: "config-data") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.504641 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.504678 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777459 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"001934a2249d5b368738c4a7af5d9dcb8380201f7480fa3d74bbff0f9ef72bdd"}
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777875 4949 scope.go:117] "RemoveContainer" containerID="74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777570 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.806033 4949 scope.go:117] "RemoveContainer" containerID="094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.816699 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.831632 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.838030 4949 scope.go:117] "RemoveContainer" containerID="96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850324 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850742 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850766 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850785 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850795 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850808 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850817 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850830 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850838 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850856 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850865 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850876 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850884 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850897 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850931 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850940 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850977 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850986 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850997 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851006 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.851019 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.851038 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851045 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851231 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851251 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851266 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851278 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851291 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851331 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851345 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851356 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851367 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851378 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851388 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851399 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.854372 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.862349 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.862735 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.868549 4949 scope.go:117] "RemoveContainer" containerID="ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686"
Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.868834 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012537 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012658 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012763 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114098 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114132 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114212 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114298 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.116316 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.120278 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.120451 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0"
Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.122677 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " 
pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.128485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.142420 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.177250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.632573 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.786452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"361a157554a6c6e8ca30573fa06fb2376290235760dc3bb233c617496cbf7fb2"} Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.798440 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" path="/var/lib/kubelet/pods/ea823a04-f7e4-48d6-a4b3-19ad8779178d/volumes" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.930423 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.152025 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.152406 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.473000 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.474293 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.475966 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hc7rn" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.476233 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.482058 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.486384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.550827 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: 
I0120 15:08:27.550889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.550970 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.551002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652494 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " 
pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.656597 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.657127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.665163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " 
pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.669042 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.790237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.805935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93"} Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.265719 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:28 crc kubenswrapper[4949]: W0120 15:08:28.265742 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d68b174_da83_41e7_804c_68a858beedf7.slice/crio-1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b WatchSource:0}: Error finding container 1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b: Status 404 returned error can't find the container with id 1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.816499 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9"} Jan 20 15:08:28 crc kubenswrapper[4949]: 
I0120 15:08:28.817139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d"} Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.818050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerStarted","Data":"1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b"} Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.835937 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a"} Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836881 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836224 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" containerID="cri-o://613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836092 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" containerID="cri-o://9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836269 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" 
containerID="cri-o://67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836285 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" containerID="cri-o://44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.862502 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.415913428 podStartE2EDuration="5.862482373s" podCreationTimestamp="2026-01-20 15:08:25 +0000 UTC" firstStartedPulling="2026-01-20 15:08:26.640781761 +0000 UTC m=+1102.450612619" lastFinishedPulling="2026-01-20 15:08:30.087350706 +0000 UTC m=+1105.897181564" observedRunningTime="2026-01-20 15:08:30.86050121 +0000 UTC m=+1106.670332058" watchObservedRunningTime="2026-01-20 15:08:30.862482373 +0000 UTC m=+1106.672313231" Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845264 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" exitCode=0 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845638 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" exitCode=2 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845647 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" exitCode=0 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a"} Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9"} Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d"} Jan 20 15:08:32 crc kubenswrapper[4949]: I0120 15:08:32.856112 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" exitCode=0 Jan 20 15:08:32 crc kubenswrapper[4949]: I0120 15:08:32.856185 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.623823 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637337 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637429 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637481 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637734 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637926 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638300 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638724 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638820 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.644874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts" (OuterVolumeSpecName: "scripts") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.651704 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945" (OuterVolumeSpecName: "kube-api-access-mn945") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "kube-api-access-mn945". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.678382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.739920 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740185 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740194 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740202 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.766342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.778403 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data" (OuterVolumeSpecName: "config-data") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.841675 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.841972 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.897020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerStarted","Data":"3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"361a157554a6c6e8ca30573fa06fb2376290235760dc3bb233c617496cbf7fb2"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900539 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900558 4949 scope.go:117] "RemoveContainer" containerID="67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.923798 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-845d4" podStartSLOduration=1.746305046 podStartE2EDuration="10.923778626s" podCreationTimestamp="2026-01-20 15:08:27 +0000 UTC" firstStartedPulling="2026-01-20 15:08:28.267638142 +0000 UTC m=+1104.077469000" lastFinishedPulling="2026-01-20 15:08:37.445111672 +0000 UTC m=+1113.254942580" observedRunningTime="2026-01-20 15:08:37.912559948 +0000 UTC m=+1113.722390836" watchObservedRunningTime="2026-01-20 15:08:37.923778626 +0000 UTC m=+1113.733609494" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.924599 4949 scope.go:117] "RemoveContainer" containerID="613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.941379 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.952673 4949 scope.go:117] "RemoveContainer" containerID="44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.957634 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.963979 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964509 
4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964613 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964720 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964802 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964890 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964966 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.965053 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965140 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965701 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965829 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: 
I0120 15:08:37.965929 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.966013 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.970007 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.972568 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.974266 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.978695 4949 scope.go:117] "RemoveContainer" containerID="9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.978711 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.044771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.044824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 
15:08:38.045068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045111 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045394 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045510 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146390 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146604 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146684 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146922 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 
15:08:38.147203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150454 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150467 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150557 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " 
pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.151809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.170049 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.292821 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.802844 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" path="/var/lib/kubelet/pods/8b066491-fdc4-49f7-8c15-9a9dd53d4e48/volumes" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.806299 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:38 crc kubenswrapper[4949]: W0120 15:08:38.808407 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaded75a0_687f_4b2c_a437_d170b095dfa1.slice/crio-4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf WatchSource:0}: Error finding container 4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf: Status 404 returned error can't find the container with id 4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.917152 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf"} Jan 20 15:08:40 crc kubenswrapper[4949]: I0120 15:08:40.956676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2"} Jan 20 15:08:41 crc kubenswrapper[4949]: I0120 15:08:41.965932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd"} Jan 20 15:08:41 crc kubenswrapper[4949]: I0120 15:08:41.966471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6"} Jan 20 15:08:43 crc kubenswrapper[4949]: I0120 15:08:43.989261 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc"} Jan 20 15:08:43 crc kubenswrapper[4949]: I0120 15:08:43.989887 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:08:44 crc kubenswrapper[4949]: I0120 15:08:44.012864 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.06238062 podStartE2EDuration="7.012842975s" podCreationTimestamp="2026-01-20 15:08:37 +0000 UTC" firstStartedPulling="2026-01-20 15:08:38.811269304 +0000 UTC m=+1114.621100162" lastFinishedPulling="2026-01-20 15:08:42.761731659 +0000 UTC m=+1118.571562517" observedRunningTime="2026-01-20 
15:08:44.006712149 +0000 UTC m=+1119.816543007" watchObservedRunningTime="2026-01-20 15:08:44.012842975 +0000 UTC m=+1119.822673843" Jan 20 15:08:48 crc kubenswrapper[4949]: I0120 15:08:48.023753 4949 generic.go:334] "Generic (PLEG): container finished" podID="5d68b174-da83-41e7-804c-68a858beedf7" containerID="3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c" exitCode=0 Jan 20 15:08:48 crc kubenswrapper[4949]: I0120 15:08:48.023840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerDied","Data":"3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c"} Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.429453 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.558929 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.559007 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.559066 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 
15:08:49.559123 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.564286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl" (OuterVolumeSpecName: "kube-api-access-xrrnl") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "kube-api-access-xrrnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.566084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts" (OuterVolumeSpecName: "scripts") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.586811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data" (OuterVolumeSpecName: "config-data") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.609705 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661293 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661331 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661341 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661351 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048146 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerDied","Data":"1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b"} Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048188 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048247 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.193274 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:50 crc kubenswrapper[4949]: E0120 15:08:50.193820 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.193846 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.194088 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.194877 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.198170 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.198506 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hc7rn" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.200477 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.272775 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 
15:08:50.272819 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.272890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.374701 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.374906 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.375078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.381198 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.392192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.397616 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.532287 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:51 crc kubenswrapper[4949]: I0120 15:08:50.998300 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:51 crc kubenswrapper[4949]: I0120 15:08:51.060263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"432760ec-2ef6-4335-a7ba-21a2d73ede73","Type":"ContainerStarted","Data":"fb0cd78d6cdc88bb0f51b416b6d026b8d1a38e3560c300297e80481570b83afe"} Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.069057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"432760ec-2ef6-4335-a7ba-21a2d73ede73","Type":"ContainerStarted","Data":"659590ecd0960c2e3a29a1ea62dcc192e0e8d683680a4448d1a2f46b2fa0104c"} Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.069449 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.096646 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.096630708 podStartE2EDuration="2.096630708s" podCreationTimestamp="2026-01-20 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:52.089443639 +0000 UTC m=+1127.899274497" watchObservedRunningTime="2026-01-20 15:08:52.096630708 +0000 UTC m=+1127.906461566" Jan 20 15:08:57 crc kubenswrapper[4949]: I0120 15:08:57.152184 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:08:57 crc kubenswrapper[4949]: I0120 15:08:57.152477 4949 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:09:00 crc kubenswrapper[4949]: I0120 15:09:00.578295 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.143273 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.144402 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.167545 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.182545 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.192264 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.278280 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.280809 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.286154 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.294106 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348624 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348677 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348715 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 
20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.378237 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.385219 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.392847 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.393949 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.434335 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.435780 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.439888 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450736 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450788 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450813 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450881 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450915 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.457860 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.463090 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.468295 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.472852 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.519115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.531573 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.532672 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.538860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.551422 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560123 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjgd\" (UniqueName: 
\"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560269 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560311 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560341 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560396 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560413 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560428 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560450 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560468 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc 
kubenswrapper[4949]: I0120 15:09:01.561205 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.570044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.572674 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.580506 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.582744 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.600875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.613227 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.617116 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661704 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661760 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661810 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: 
\"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661828 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661863 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661879 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661897 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661933 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.662018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.662037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc 
kubenswrapper[4949]: I0120 15:09:01.663429 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.666794 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.669809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.672349 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.672621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.674136 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.680572 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.681850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.686432 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.688309 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.706964 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763433 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc 
kubenswrapper[4949]: I0120 15:09:01.764352 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.765082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.771202 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.779941 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.787831 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.805797 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.940801 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.963925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.984821 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.136386 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.372539 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.374009 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2edecf7f_fbdf_4907_ba28_33f70a58f37a.slice/crio-dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6 WatchSource:0}: Error finding container dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6: Status 404 returned error can't find the container with id dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6 Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.516106 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5364ff4f_3ee5_4577_b82c_0c094bd55125.slice/crio-bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8 WatchSource:0}: Error finding container bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8: Status 404 returned error can't find the container with id bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8 Jan 20 15:09:02 
crc kubenswrapper[4949]: I0120 15:09:02.519027 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.527395 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.529545 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.536408 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.536569 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.546946 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.622108 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.625103 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2db0feee_11b2_4926_a0c9_2b3f39743fa3.slice/crio-5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764 WatchSource:0}: Error finding container 5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764: Status 404 returned error can't find the container with id 5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764 Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.696693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.696871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.697078 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.697261 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.699273 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.714553 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3dc9fa_0768_4d5d_bbe8_812388ebabf7.slice/crio-ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b WatchSource:0}: Error finding container 
ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b: Status 404 returned error can't find the container with id ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.798915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.798991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.799011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.799085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804482 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k"
Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k"
Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804687 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k"
Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.806726 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"]
Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.809626 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2f6e22_4c5a_4d30_95a8_0cacc9f21791.slice/crio-23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c WatchSource:0}: Error finding container 23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c: Status 404 returned error can't find the container with id 23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c
Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.824364 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k"
Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.921813 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k"
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.214951 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.216238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.217102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerStarted","Data":"ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.217999 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerStarted","Data":"5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.219562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerStarted","Data":"ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.219593 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerStarted","Data":"bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.222958 4949 generic.go:334] "Generic (PLEG): container finished" podID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerID="b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f" exitCode=0
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.223044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.223067 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerStarted","Data":"23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c"}
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.253787 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4pxxs" podStartSLOduration=2.253769482 podStartE2EDuration="2.253769482s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:03.241855051 +0000 UTC m=+1139.051685909" watchObservedRunningTime="2026-01-20 15:09:03.253769482 +0000 UTC m=+1139.063600340"
Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.378643 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"]
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.234128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerStarted","Data":"044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7"}
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.234550 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6"
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.237650 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerStarted","Data":"8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a"}
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.237680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerStarted","Data":"c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63"}
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.261731 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" podStartSLOduration=3.261655216 podStartE2EDuration="3.261655216s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:04.253843756 +0000 UTC m=+1140.063674614" watchObservedRunningTime="2026-01-20 15:09:04.261655216 +0000 UTC m=+1140.071486084"
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.278061 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" podStartSLOduration=2.278037539 podStartE2EDuration="2.278037539s" podCreationTimestamp="2026-01-20 15:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:04.269410633 +0000 UTC m=+1140.079241491" watchObservedRunningTime="2026-01-20 15:09:04.278037539 +0000 UTC m=+1140.087868397"
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.662713 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.674008 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.260743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerStarted","Data":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"}
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.262399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerStarted","Data":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"}
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.262800 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" gracePeriod=30
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.270692 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"}
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.273777 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32"}
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.290018 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.223171814 podStartE2EDuration="5.290000406s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.717848057 +0000 UTC m=+1138.527678915" lastFinishedPulling="2026-01-20 15:09:05.784676659 +0000 UTC m=+1141.594507507" observedRunningTime="2026-01-20 15:09:06.275496842 +0000 UTC m=+1142.085327700" watchObservedRunningTime="2026-01-20 15:09:06.290000406 +0000 UTC m=+1142.099831254"
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.297281 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.140347948 podStartE2EDuration="5.297241037s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.626944073 +0000 UTC m=+1138.436774931" lastFinishedPulling="2026-01-20 15:09:05.783837162 +0000 UTC m=+1141.593668020" observedRunningTime="2026-01-20 15:09:06.293225898 +0000 UTC m=+1142.103056756" watchObservedRunningTime="2026-01-20 15:09:06.297241037 +0000 UTC m=+1142.107071905"
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.942277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.964972 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"}
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283902 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log" containerID="cri-o://7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" gracePeriod=30
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283942 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata" containerID="cri-o://cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" gracePeriod=30
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.287670 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5"}
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.312603 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9090683 podStartE2EDuration="6.312583559s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.379073804 +0000 UTC m=+1138.188904662" lastFinishedPulling="2026-01-20 15:09:05.782589073 +0000 UTC m=+1141.592419921" observedRunningTime="2026-01-20 15:09:07.311976739 +0000 UTC m=+1143.121807617" watchObservedRunningTime="2026-01-20 15:09:07.312583559 +0000 UTC m=+1143.122414417"
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.344057 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.753123017 podStartE2EDuration="6.344038224s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.196543091 +0000 UTC m=+1138.006373939" lastFinishedPulling="2026-01-20 15:09:05.787458288 +0000 UTC m=+1141.597289146" observedRunningTime="2026-01-20 15:09:07.330734339 +0000 UTC m=+1143.140565227" watchObservedRunningTime="2026-01-20 15:09:07.344038224 +0000 UTC m=+1143.153869092"
Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.885419 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000354 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") "
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") "
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000874 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") "
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000983 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") "
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.003022 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs" (OuterVolumeSpecName: "logs") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.007353 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t" (OuterVolumeSpecName: "kube-api-access-k5n6t") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "kube-api-access-k5n6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.033016 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data" (OuterVolumeSpecName: "config-data") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.035733 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102550 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102786 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") on node \"crc\" DevicePath \"\""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102799 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102807 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") on node \"crc\" DevicePath \"\""
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300034 4949 generic.go:334] "Generic (PLEG): container finished" podID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" exitCode=0
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300064 4949 generic.go:334] "Generic (PLEG): container finished" podID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" exitCode=143
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300897 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"}
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"}
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305767 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6"}
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305787 4949 scope.go:117] "RemoveContainer" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.313715 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.366687 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.415901 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435323 4949 scope.go:117] "RemoveContainer" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"
Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.435821 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435859 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} err="failed to get container status \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435884 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"
Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.436072 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.436095 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} err="failed to get container status \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.436110 4949 scope.go:117] "RemoveContainer" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.441589 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.442957 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} err="failed to get container status \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.443008 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.447894 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} err="failed to get container status \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.451555 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.452003 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452020 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log"
Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.452035 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452043 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452238 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452260 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.453412 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.467908 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.468207 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.485492 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515802 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515892 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515994 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.516052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.516081 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618108 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618563 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.624311 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.627403 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.632168 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.657186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.798421 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" path="/var/lib/kubelet/pods/2edecf7f-fbdf-4907-ba28-33f70a58f37a/volumes"
Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.802221 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 15:09:09 crc kubenswrapper[4949]: W0120 15:09:09.271688 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode612b892_650e_4f7e_b7f9_70abcd671b83.slice/crio-1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251 WatchSource:0}: Error finding container 1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251: Status 404 returned error can't find the container with id 1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251
Jan 20 15:09:09 crc kubenswrapper[4949]: I0120 15:09:09.276204 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 15:09:09 crc kubenswrapper[4949]: I0120 15:09:09.311245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251"}
Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.321026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"}
Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.321615 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"}
Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.322767 4949 generic.go:334] "Generic (PLEG): container finished" podID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerID="ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec" exitCode=0
Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.322816 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerDied","Data":"ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec"}
Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.339788 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.339765464 podStartE2EDuration="2.339765464s" podCreationTimestamp="2026-01-20 15:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:10.338211265 +0000 UTC m=+1146.148042123" watchObservedRunningTime="2026-01-20 15:09:10.339765464 +0000 UTC m=+1146.149596322"
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.331157 4949 generic.go:334] "Generic (PLEG): container finished" podID="883cbf80-263a-4fc7-b962-147019f05553" containerID="8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a" exitCode=0
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.331260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerDied","Data":"8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a"}
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.614765 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.615215 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.665348 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs"
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783568 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") "
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") "
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") "
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") "
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.791434 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8" (OuterVolumeSpecName: "kube-api-access-bxjj8") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "kube-api-access-bxjj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.795126 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts" (OuterVolumeSpecName: "scripts") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.814098 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data" (OuterVolumeSpecName: "config-data") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.816151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887177 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887207 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887218 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887226 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.942178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.987072 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.993444 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.076360 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.076643 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" 
podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" containerID="cri-o://4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" gracePeriod=10 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.087296 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.087804 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" containerID="cri-o://62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.350814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerDied","Data":"bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8"} Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.351709 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.350834 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.353147 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerID="62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" exitCode=2 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.354077 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerDied","Data":"62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe"} Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.450156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.572617 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.572880 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" containerID="cri-o://d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.573342 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" containerID="cri-o://92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.595204 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.595350 4949 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.606541 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.606745 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" containerID="cri-o://d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.607117 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" containerID="cri-o://872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.945102 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.954913 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010553 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010597 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010939 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010968 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.026657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx" (OuterVolumeSpecName: "kube-api-access-42zvx") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "kube-api-access-42zvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.037283 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl" (OuterVolumeSpecName: "kube-api-access-6t9zl") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "kube-api-access-6t9zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.038869 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts" (OuterVolumeSpecName: "scripts") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.059739 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.063340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data" (OuterVolumeSpecName: "config-data") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.079309 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.092099 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.103821 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113653 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113713 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113725 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113748 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113764 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113781 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113794 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113806 4949 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.139448 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.140102 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config" (OuterVolumeSpecName: "config") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.216633 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.271250 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.313249 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.318455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.324918 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt" (OuterVolumeSpecName: "kube-api-access-sncqt") pod "8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" (UID: "8b1e1042-2ebf-4d51-972d-8ebd6d8b4290"). InnerVolumeSpecName "kube-api-access-sncqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371453 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerDied","Data":"b6f194539b862d0ee8b6be35de75344541fb71d8b75e2a6809fe23930f272acc"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371547 4949 scope.go:117] "RemoveContainer" containerID="62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371665 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380737 4949 generic.go:334] "Generic (PLEG): container finished" podID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" exitCode=0 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380828 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"052fba35240bac70130e0cfdaa3376b77a051a12b8b97d00e1f00afe30ca5b57"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.381193 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.391575 4949 generic.go:334] "Generic (PLEG): container finished" podID="bac9a094-8b7c-494a-9436-405785ad8097" containerID="d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" exitCode=143 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.391649 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398240 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerDied","Data":"c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398289 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398362 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.415502 4949 scope.go:117] "RemoveContainer" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.434802 4949 generic.go:334] "Generic (PLEG): container finished" podID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" exitCode=0 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.434850 4949 generic.go:334] "Generic (PLEG): container finished" podID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" exitCode=143 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437424 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437719 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437749 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437760 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438630 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438700 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438840 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.443256 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.445286 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs" (OuterVolumeSpecName: "logs") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.459634 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq" (OuterVolumeSpecName: "kube-api-access-jr2zq") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "kube-api-access-jr2zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.469491 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.482550 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.496747 4949 scope.go:117] "RemoveContainer" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.515979 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.522143 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532451 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532915 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="init" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532942 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="init" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532958 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532967 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532982 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532990 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532997 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533005 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533019 4949 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533041 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533049 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533089 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533311 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533327 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533341 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533358 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc 
kubenswrapper[4949]: I0120 15:09:13.533374 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533389 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.534104 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.536652 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.544215 4949 scope.go:117] "RemoveContainer" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.544305 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.545456 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": container with ID starting with 4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b not found: ID does not exist" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545570 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545568 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"} err="failed to get container status \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": rpc error: code = NotFound desc = could not find container \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": container with ID starting with 4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545603 4949 scope.go:117] "RemoveContainer" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.546052 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": container with ID starting with 65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b not found: ID does not exist" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.546075 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b"} err="failed to get container status \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": rpc error: code = NotFound desc = could not find container \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": container with ID starting with 65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.546117 4949 scope.go:117] "RemoveContainer" 
containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547072 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547086 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547095 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547104 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data" (OuterVolumeSpecName: "config-data") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548178 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.553173 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.567355 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.578804 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.578985 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.590571 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648565 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nt2\" (UniqueName: \"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648614 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648909 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648987 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649034 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649060 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649133 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649196 4949 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661252 4949 scope.go:117] "RemoveContainer" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.661799 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661858 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} err="failed to get container status \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661892 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.662276 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" 
containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662307 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} err="failed to get container status \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": rpc error: code = NotFound desc = could not find container \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662330 4949 scope.go:117] "RemoveContainer" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662595 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} err="failed to get container status \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662651 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.663092 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} err="failed to get container status \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": rpc error: code = NotFound desc = could 
not find container \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751053 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751242 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8nt2\" (UniqueName: 
\"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751301 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.764884 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.767506 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.768930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.774795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.779162 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.779389 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8nt2\" (UniqueName: \"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.786397 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.824415 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.824902 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" 
containerName="ceilometer-central-agent" containerID="cri-o://c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.825617 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" containerID="cri-o://c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.825684 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" containerID="cri-o://ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.826056 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" containerID="cri-o://aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.864005 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.876635 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.884892 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.911589 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.917728 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.919946 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.923927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.929160 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.946631 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063840 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063998 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.064021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.064048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168499 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168560 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.169126 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.176022 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.176885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc 
kubenswrapper[4949]: I0120 15:09:14.178984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.189080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.302282 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.408630 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.446870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a8d0e18-297d-407d-8c7c-64555052b960","Type":"ContainerStarted","Data":"86735712e59b167561983ba5424e8121510c647890a906b60418269a87464210"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455927 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" exitCode=0 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455976 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" exitCode=2 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455985 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" exitCode=0 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456055 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.461963 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" containerID="cri-o://8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" gracePeriod=30 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.528632 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: W0120 15:09:14.551878 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19f25ae_0920_4573_9f2e_6447ca83e76c.slice/crio-ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7 WatchSource:0}: Error finding container ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7: Status 404 returned error can't find the 
container with id ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.800572 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" path="/var/lib/kubelet/pods/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.801762 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" path="/var/lib/kubelet/pods/e612b892-650e-4f7e-b7f9-70abcd671b83/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.802658 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" path="/var/lib/kubelet/pods/f15e5c23-e5ed-49da-a675-b79a84acb3a5/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.860109 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.472076 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.474374 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.474399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"b5f7d7790b27ef22d09958a0de70361e54da69dea5bf5665160ed80a276f0768"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476155 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e19f25ae-0920-4573-9f2e-6447ca83e76c","Type":"ContainerStarted","Data":"906adccbd92e2f2c3210b92cbe740b10245ff7c3835196b47b628a62d451b219"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476183 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e19f25ae-0920-4573-9f2e-6447ca83e76c","Type":"ContainerStarted","Data":"ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476793 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.478791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a8d0e18-297d-407d-8c7c-64555052b960","Type":"ContainerStarted","Data":"2b4e856bc66062e5a4ea5bebd7e327c9999e21fbee84a97312b41e90a73f495a"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.479409 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.499350 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.499320872 podStartE2EDuration="2.499320872s" podCreationTimestamp="2026-01-20 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:15.495330044 +0000 UTC m=+1151.305160902" watchObservedRunningTime="2026-01-20 15:09:15.499320872 +0000 UTC m=+1151.309151780" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.527479 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.132991257 podStartE2EDuration="2.527455361s" podCreationTimestamp="2026-01-20 15:09:13 
+0000 UTC" firstStartedPulling="2026-01-20 15:09:14.420424979 +0000 UTC m=+1150.230255837" lastFinishedPulling="2026-01-20 15:09:14.814889083 +0000 UTC m=+1150.624719941" observedRunningTime="2026-01-20 15:09:15.514618012 +0000 UTC m=+1151.324448870" watchObservedRunningTime="2026-01-20 15:09:15.527455361 +0000 UTC m=+1151.337286239" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.547768 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.54774993 podStartE2EDuration="2.54774993s" podCreationTimestamp="2026-01-20 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:15.532455701 +0000 UTC m=+1151.342286569" watchObservedRunningTime="2026-01-20 15:09:15.54774993 +0000 UTC m=+1151.357580788" Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.944196 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.946071 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.947283 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.947342 4949 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.499471 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" exitCode=0 Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.499531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6"} Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.630471 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755064 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755148 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755223 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755293 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755407 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.757597 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.757785 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.761965 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts" (OuterVolumeSpecName: "scripts") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.763336 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm" (OuterVolumeSpecName: "kube-api-access-jm8bm") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "kube-api-access-jm8bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.787845 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.838899 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.854340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data" (OuterVolumeSpecName: "config-data") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857317 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857352 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857362 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857372 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857379 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857387 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857395 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.396591 4949 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473045 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473506 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473813 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.479927 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng" (OuterVolumeSpecName: "kube-api-access-2j2ng") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). InnerVolumeSpecName "kube-api-access-2j2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.497594 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.507442 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data" (OuterVolumeSpecName: "config-data") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515493 4949 scope.go:117] "RemoveContainer" containerID="c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515855 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536101 4949 generic.go:334] "Generic (PLEG): container finished" podID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" exitCode=0 Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536145 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerDied","Data":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerDied","Data":"ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536218 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.570876 4949 scope.go:117] "RemoveContainer" containerID="ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576544 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576583 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576597 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.583906 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.607615 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.607858 4949 scope.go:117] "RemoveContainer" containerID="aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.634153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.640557 4949 scope.go:117] "RemoveContainer" containerID="c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.661394 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.677699 4949 scope.go:117] "RemoveContainer" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680463 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680816 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680831 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680841 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680847 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680854 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680860 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680870 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680876 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc 
kubenswrapper[4949]: E0120 15:09:18.680899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681076 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681092 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681106 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681119 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681128 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681938 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.686846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.701412 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.714254 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.715652 4949 scope.go:117] "RemoveContainer" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.716066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": container with ID starting with 8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e not found: ID does not exist" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.716111 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"} err="failed to get container status \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": rpc error: code = NotFound desc = could not find container \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": container with ID starting with 8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e not found: ID does not exist" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.716567 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720292 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720629 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720922 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.728467 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.779890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.779948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780203 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780280 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qkt\" 
(UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780321 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780367 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780813 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") 
pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.801047 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" path="/var/lib/kubelet/pods/aded75a0-687f-4b2c-a437-d170b095dfa1/volumes" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.801941 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" path="/var/lib/kubelet/pods/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7/volumes" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod 
\"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882397 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882448 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.883221 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.883471 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" 
Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886764 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886909 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888965 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.889578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.898096 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.899729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.006129 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.038100 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.303358 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.303912 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:09:19 crc kubenswrapper[4949]: W0120 15:09:19.475189 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88614638_70cb_4bcf_a017_bb7dbe17f962.slice/crio-5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e WatchSource:0}: Error finding container 5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e: Status 404 returned error can't find the container with id 5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.475988 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.550426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerStarted","Data":"5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556510 4949 generic.go:334] "Generic (PLEG): container finished" podID="bac9a094-8b7c-494a-9436-405785ad8097" containerID="92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" exitCode=0 Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556680 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556691 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.611121 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:19 crc kubenswrapper[4949]: W0120 15:09:19.613607 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb896cb7c_63d5_4b9d_af2c_bfb89b07100c.slice/crio-fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12 WatchSource:0}: Error finding container fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12: Status 404 returned error can't find the container with id fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12 Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.656946 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.817807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818644 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs" (OuterVolumeSpecName: "logs") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.819626 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.825747 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9" (OuterVolumeSpecName: "kube-api-access-nlbs9") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "kube-api-access-nlbs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.848367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data" (OuterVolumeSpecName: "config-data") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.851433 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923857 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923891 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923900 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.572572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerStarted","Data":"14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.579483 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.583653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.583725 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.596929 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.59689828 podStartE2EDuration="2.59689828s" podCreationTimestamp="2026-01-20 15:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:20.592529671 +0000 UTC m=+1156.402360539" watchObservedRunningTime="2026-01-20 15:09:20.59689828 +0000 UTC m=+1156.406729178" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.618392 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.630836 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.638117 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: E0120 15:09:20.638702 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.638736 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: E0120 15:09:20.638768 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.638782 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.639100 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.639144 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.640791 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.643235 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.645952 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735735 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735810 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735844 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.736359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.798612 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac9a094-8b7c-494a-9436-405785ad8097" path="/var/lib/kubelet/pods/bac9a094-8b7c-494a-9436-405785ad8097/volumes" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.837921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.837980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838025 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45bd\" (UniqueName: 
\"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838452 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838626 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.842573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.856488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.857503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.986424 4949 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.566438 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:21 crc kubenswrapper[4949]: W0120 15:09:21.567658 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bd6ec41_c953_4165_a562_7d02937f0974.slice/crio-4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc WatchSource:0}: Error finding container 4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc: Status 404 returned error can't find the container with id 4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.597773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.599594 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.609192 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.610824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 
15:09:22.610914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.630393 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.630365924 podStartE2EDuration="2.630365924s" podCreationTimestamp="2026-01-20 15:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:22.626891123 +0000 UTC m=+1158.436722001" watchObservedRunningTime="2026-01-20 15:09:22.630365924 +0000 UTC m=+1158.440196792" Jan 20 15:09:23 crc kubenswrapper[4949]: I0120 15:09:23.894156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:23 crc kubenswrapper[4949]: I0120 15:09:23.900389 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.007635 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.302995 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.303043 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.625471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} Jan 20 15:09:24 crc kubenswrapper[4949]: 
I0120 15:09:24.626475 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.669564 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.652027822 podStartE2EDuration="6.66954317s" podCreationTimestamp="2026-01-20 15:09:18 +0000 UTC" firstStartedPulling="2026-01-20 15:09:19.61637663 +0000 UTC m=+1155.426207488" lastFinishedPulling="2026-01-20 15:09:23.633891978 +0000 UTC m=+1159.443722836" observedRunningTime="2026-01-20 15:09:24.666843703 +0000 UTC m=+1160.476674561" watchObservedRunningTime="2026-01-20 15:09:24.66954317 +0000 UTC m=+1160.479374038" Jan 20 15:09:25 crc kubenswrapper[4949]: I0120 15:09:25.315697 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:25 crc kubenswrapper[4949]: I0120 15:09:25.315720 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152461 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152560 4949 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.153463 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.153571 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b" gracePeriod=600 Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.671469 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b" exitCode=0 Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.671979 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.672005 4949 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"} Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.672021 4949 scope.go:117] "RemoveContainer" containerID="a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.006845 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.052258 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.723375 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 15:09:30 crc kubenswrapper[4949]: I0120 15:09:30.987856 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:30 crc kubenswrapper[4949]: I0120 15:09:30.987933 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:32 crc kubenswrapper[4949]: I0120 15:09:32.070816 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:32 crc kubenswrapper[4949]: I0120 15:09:32.070884 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 
15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.311117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.314156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.320487 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.742909 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.651225 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779307 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779481 4949 generic.go:334] "Generic (PLEG): container finished" podID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" exitCode=137 Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779563 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779578 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerDied","Data":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"} Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779660 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerDied","Data":"5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764"} Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779684 4949 scope.go:117] "RemoveContainer" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.780282 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.780387 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.784565 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd" (OuterVolumeSpecName: "kube-api-access-msjgd") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "kube-api-access-msjgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.803227 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data" (OuterVolumeSpecName: "config-data") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.804874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.876470 4949 scope.go:117] "RemoveContainer" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: E0120 15:09:36.877020 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": container with ID starting with 4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac not found: ID does not exist" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.877067 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"} err="failed to get container status \"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": rpc error: code = NotFound desc = could not find container 
\"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": container with ID starting with 4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac not found: ID does not exist" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883022 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883056 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883070 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.109040 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.117879 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.140233 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: E0120 15:09:37.140987 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.141007 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.141275 4949 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.142223 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145343 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145624 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145852 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.149661 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289457 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289620 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391907 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.392017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.392101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.397829 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.398275 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.399018 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.401236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.412296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.464425 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.987084 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: W0120 15:09:37.999813 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16e90cac_28e0_4d75_a613_d77c9263f634.slice/crio-7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b WatchSource:0}: Error finding container 7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b: Status 404 returned error can't find the container with id 7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.800632 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" path="/var/lib/kubelet/pods/2db0feee-11b2-4926-a0c9-2b3f39743fa3/volumes" Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.808969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16e90cac-28e0-4d75-a613-d77c9263f634","Type":"ContainerStarted","Data":"8ec93c74147135c69c526ad1b2d444f4f7f6b480e4f2d0084776ed862dd750e9"} Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.809009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16e90cac-28e0-4d75-a613-d77c9263f634","Type":"ContainerStarted","Data":"7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b"} Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.841627 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.841603586 podStartE2EDuration="1.841603586s" podCreationTimestamp="2026-01-20 15:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:38.838183037 +0000 UTC m=+1174.648013915" watchObservedRunningTime="2026-01-20 15:09:38.841603586 +0000 UTC m=+1174.651434464" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.993085 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.994188 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.999233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.015483 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.841477 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.844328 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.035804 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.037221 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.049973 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161145 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161203 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161237 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161256 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161608 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263542 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5b8\" (UniqueName: 
\"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.264731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.264973 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.265025 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.265255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.283717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod 
\"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.365431 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.466612 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.932733 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.871281 4949 generic.go:334] "Generic (PLEG): container finished" podID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerID="70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351" exitCode=0 Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.871506 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351"} Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.872878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerStarted","Data":"55f29ec4bac4ac4376fe3452c37cd668b9f8ffe67866fcc276100056a1141b3d"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.278297 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279214 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" 
containerID="cri-o://28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279383 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" containerID="cri-o://8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279355 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" containerID="cri-o://dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279407 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" containerID="cri-o://61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.292177 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.578478 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881256 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" exitCode=0 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881285 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" exitCode=2 Jan 20 15:09:44 crc 
kubenswrapper[4949]: I0120 15:09:44.881293 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" exitCode=0 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881356 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.882717 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" containerID="cri-o://036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.883473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerStarted","Data":"3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.883499 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:44 crc kubenswrapper[4949]: 
I0120 15:09:44.883752 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" containerID="cri-o://108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.907693 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" podStartSLOduration=2.9076739099999998 podStartE2EDuration="2.90767391s" podCreationTimestamp="2026-01-20 15:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:44.905001664 +0000 UTC m=+1180.714832522" watchObservedRunningTime="2026-01-20 15:09:44.90767391 +0000 UTC m=+1180.717504768" Jan 20 15:09:45 crc kubenswrapper[4949]: I0120 15:09:45.893829 4949 generic.go:334] "Generic (PLEG): container finished" podID="8bd6ec41-c953-4165-a562-7d02937f0974" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" exitCode=143 Jan 20 15:09:45 crc kubenswrapper[4949]: I0120 15:09:45.894088 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.465265 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.488700 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.556364 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661530 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661564 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661738 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661869 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.662712 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.662968 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.668383 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d" (OuterVolumeSpecName: "kube-api-access-mst5d") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "kube-api-access-mst5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.685604 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts" (OuterVolumeSpecName: "scripts") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.691907 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.715419 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.742663 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764421 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764469 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764482 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764490 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764498 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764506 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764537 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.766691 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data" (OuterVolumeSpecName: "config-data") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.865892 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.913910 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" exitCode=0 Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.913993 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914013 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914078 4949 scope.go:117] "RemoveContainer" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.938106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.956315 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.961673 4949 scope.go:117] "RemoveContainer" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.976501 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994184 4949 scope.go:117] "RemoveContainer" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994294 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994622 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" 
containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994633 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994646 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994652 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994663 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994669 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994684 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994689 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994903 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994912 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994923 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994935 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.996414 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005063 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005319 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005425 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.025172 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.044054 4949 scope.go:117] "RemoveContainer" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.073338 4949 scope.go:117] "RemoveContainer" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.074704 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": container with ID starting with 8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb not found: ID does not exist" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.074746 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} err="failed to get container status \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": rpc error: code = NotFound desc = could not find container \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": container with ID starting with 8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.074794 4949 scope.go:117] "RemoveContainer" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075664 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075786 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075810 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075879 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076395 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": container with ID starting with 
dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db not found: ID does not exist" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076425 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} err="failed to get container status \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": rpc error: code = NotFound desc = could not find container \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": container with ID starting with dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076446 4949 scope.go:117] "RemoveContainer" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076734 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": container with ID starting with 61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437 not found: ID does not exist" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076764 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} err="failed to get container status \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": rpc error: code = NotFound desc = could not find container \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": container with ID starting with 61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437 not found: ID does not 
exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076782 4949 scope.go:117] "RemoveContainer" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076979 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": container with ID starting with 28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e not found: ID does not exist" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.077004 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} err="failed to get container status \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": rpc error: code = NotFound desc = could not find container \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": container with ID starting with 28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.146446 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.147780 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.156910 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.157103 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.159890 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.177448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.177894 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178400 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178453 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " 
pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178539 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178592 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.179251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.179615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.181717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.182219 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.183076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.183765 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.185244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.195900 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.280771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.280979 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.281218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.281258 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.381674 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.383875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.383946 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.384060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.384091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.389592 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " 
pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.390578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.393731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.411509 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.484292 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.623762 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688781 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688852 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688963 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.689055 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.690341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs" (OuterVolumeSpecName: "logs") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.696831 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd" (OuterVolumeSpecName: "kube-api-access-b45bd") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "kube-api-access-b45bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.716369 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data podName:8bd6ec41-c953-4165-a562-7d02937f0974 nodeName:}" failed. No retries permitted until 2026-01-20 15:09:49.216343125 +0000 UTC m=+1185.026173983 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974") : error deleting /var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volume-subpaths: remove /var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volume-subpaths: no such file or directory Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.719459 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791530 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791559 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791572 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.801345 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" path="/var/lib/kubelet/pods/b896cb7c-63d5-4b9d-af2c-bfb89b07100c/volumes" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.881742 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.930435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"8f0dd94a9e63de42a5122bf4ccb941587cc9b12585cbfa4f431123811ef49ec3"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934580 4949 generic.go:334] "Generic (PLEG): container finished" podID="8bd6ec41-c953-4165-a562-7d02937f0974" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" exitCode=0 Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934636 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934695 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934748 4949 scope.go:117] "RemoveContainer" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.958041 4949 scope.go:117] "RemoveContainer" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980435 4949 scope.go:117] "RemoveContainer" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.980841 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": container with ID starting with 108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08 not found: ID does not exist" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980876 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} err="failed to get container status \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": rpc error: code = 
NotFound desc = could not find container \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": container with ID starting with 108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08 not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980902 4949 scope.go:117] "RemoveContainer" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.981289 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": container with ID starting with 036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c not found: ID does not exist" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.981330 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} err="failed to get container status \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": rpc error: code = NotFound desc = could not find container \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": container with ID starting with 036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c not found: ID does not exist" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.049631 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:49 crc kubenswrapper[4949]: W0120 15:09:49.053743 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462eb38e_1d62_43e2_92c4_1074a1c054b9.slice/crio-3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5 WatchSource:0}: Error finding container 
3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5: Status 404 returned error can't find the container with id 3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5 Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.300110 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.305374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data" (OuterVolumeSpecName: "config-data") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.402970 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.571577 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.584262 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.602725 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: E0120 15:09:49.603119 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603136 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: E0120 15:09:49.603162 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603169 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603318 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603338 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.604402 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.609193 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610389 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610487 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707605 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: 
\"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707858 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707876 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809402 
4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.810137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.810248 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.815387 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.819065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.819474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.820174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.820260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.832091 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lvq\" (UniqueName: 
\"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.922820 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.945532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.947449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerStarted","Data":"4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.947495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerStarted","Data":"3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.967755 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jgctz" podStartSLOduration=1.9677324600000001 podStartE2EDuration="1.96773246s" podCreationTimestamp="2026-01-20 15:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:49.962361098 +0000 UTC m=+1185.772191966" watchObservedRunningTime="2026-01-20 15:09:49.96773246 +0000 UTC m=+1185.777563338" Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.398977 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Jan 20 15:09:50 crc kubenswrapper[4949]: W0120 15:09:50.413416 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5c5235_abcc_4af7_b9ee_c9eacb8c2104.slice/crio-1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513 WatchSource:0}: Error finding container 1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513: Status 404 returned error can't find the container with id 1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513 Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.807359 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" path="/var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volumes" Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959419 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959470 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.962834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.980543 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9805083300000002 podStartE2EDuration="1.98050833s" podCreationTimestamp="2026-01-20 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:50.977286807 +0000 UTC m=+1186.787117675" watchObservedRunningTime="2026-01-20 15:09:50.98050833 +0000 UTC m=+1186.790339188" Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.367894 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.454862 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.455351 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" containerID="cri-o://044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" gracePeriod=10 Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:52.999762 4949 generic.go:334] "Generic (PLEG): container finished" podID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerID="044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" exitCode=0 Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.000089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7"} Jan 20 15:09:53 crc 
kubenswrapper[4949]: I0120 15:09:53.000119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c"} Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.000131 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.052528 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086538 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086639 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086723 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086934 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.119708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g" (OuterVolumeSpecName: "kube-api-access-l292g") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "kube-api-access-l292g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.161274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.184846 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config" (OuterVolumeSpecName: "config") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188606 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188638 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188648 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.200457 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.216467 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.295557 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.295895 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.011884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a"} Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.011909 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.051095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.059833 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.808647 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" path="/var/lib/kubelet/pods/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791/volumes" Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.021377 4949 generic.go:334] "Generic (PLEG): container finished" podID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerID="4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640" exitCode=0 Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.021465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerDied","Data":"4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640"} Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.025489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6"} Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.025699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.068686 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.440142102 podStartE2EDuration="8.068667185s" podCreationTimestamp="2026-01-20 15:09:47 +0000 UTC" firstStartedPulling="2026-01-20 
15:09:48.892343859 +0000 UTC m=+1184.702174717" lastFinishedPulling="2026-01-20 15:09:54.520868912 +0000 UTC m=+1190.330699800" observedRunningTime="2026-01-20 15:09:55.061037822 +0000 UTC m=+1190.870868680" watchObservedRunningTime="2026-01-20 15:09:55.068667185 +0000 UTC m=+1190.878498043" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.458632 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.573065 4949 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n" (OuterVolumeSpecName: "kube-api-access-bgv8n") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "kube-api-access-bgv8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.573886 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts" (OuterVolumeSpecName: "scripts") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.593684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.597708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data" (OuterVolumeSpecName: "config-data") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669754 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669806 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669825 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669842 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerDied","Data":"3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5"} Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046598 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046622 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.250013 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.250666 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" containerID="cri-o://14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268376 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268664 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" containerID="cri-o://02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268739 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" containerID="cri-o://d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.279967 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.280361 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" containerID="cri-o://62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.280650 4949 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" containerID="cri-o://3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.929613 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992660 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992704 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992795 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992855 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992959 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.997143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs" (OuterVolumeSpecName: "logs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.002237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq" (OuterVolumeSpecName: "kube-api-access-k6lvq") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "kube-api-access-k6lvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.030686 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data" (OuterVolumeSpecName: "config-data") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.038382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.081314 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.082889 4949 generic.go:334] "Generic (PLEG): container finished" podID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" exitCode=143 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.083090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084744 4949 generic.go:334] "Generic (PLEG): container finished" podID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerID="14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" exitCode=0 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerDied","Data":"14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086660 4949 generic.go:334] "Generic (PLEG): container finished" podID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" exitCode=0 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086683 4949 generic.go:334] "Generic (PLEG): container finished" podID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" exitCode=143 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086704 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086767 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086922 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095019 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095061 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095070 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095080 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095091 4949 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095100 4949 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.110972 4949 scope.go:117] "RemoveContainer" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.136645 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.144648 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.145318 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.145932 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.146046 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} err="failed to get container status \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.146145 4949 scope.go:117] "RemoveContainer" 
containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.147041 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.147358 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} err="failed to get container status \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.147449 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.148558 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.152246 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} err="failed to get container status \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with 
d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.152507 4949 scope.go:117] "RemoveContainer" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.154070 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} err="failed to get container status \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.158278 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159090 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159196 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159282 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159351 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159437 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" 
containerName="init" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159507 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="init" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159633 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159706 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159787 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159854 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159933 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160011 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160367 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160454 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160555 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 
crc kubenswrapper[4949]: I0120 15:09:58.160645 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160729 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.162003 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.166653 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.167088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.167344 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.172371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196548 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196759 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197109 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197135 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197158 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197336 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197360 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.202489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt" (OuterVolumeSpecName: "kube-api-access-28qkt") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "kube-api-access-28qkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.239271 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.243685 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data" (OuterVolumeSpecName: "config-data") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.298748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299227 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299466 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299483 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299497 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.300073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302340 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302448 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " 
pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.303804 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.317584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.490899 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.801780 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" path="/var/lib/kubelet/pods/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104/volumes" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.924684 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerDied","Data":"5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e"} Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099329 4949 scope.go:117] "RemoveContainer" containerID="14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099376 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.104549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"0f1a3cbee3a747e990ca06edb99c32d00380082af15dd8a2b11b3c13d5cf9118"} Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.128048 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.139094 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.149685 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.152187 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.155066 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.157435 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217457 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.319477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.319930 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.320055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.322827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.324074 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.343974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.471021 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.942100 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.117308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51e2ed93-379c-457d-992a-57160c6be51a","Type":"ContainerStarted","Data":"1aa29939e06abe06cc2f332dd6c17e842c249284962c999b67299a44bc656bed"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.121065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"80445cb9df2a5e3b1f4410a1f1a448edb39ca43b6b5334ff6ba2e31400d796fa"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.121093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"b6255c293efec89db417e280102d10fafdec207ec9f08f9eb79c188890c11e4b"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.138576 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.138559049 podStartE2EDuration="2.138559049s" podCreationTimestamp="2026-01-20 15:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:00.13579788 +0000 UTC m=+1195.945628748" watchObservedRunningTime="2026-01-20 15:10:00.138559049 +0000 UTC m=+1195.948389907" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.430006 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 
10.217.0.2:58568->10.217.0.182:8775: read: connection reset by peer" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.430030 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:58570->10.217.0.182:8775: read: connection reset by peer" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.804311 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" path="/var/lib/kubelet/pods/88614638-70cb-4bcf-a017-bb7dbe17f962/volumes" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.970081 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.061603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.061822 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.062752 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 
15:10:01.063153 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.063235 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.063957 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs" (OuterVolumeSpecName: "logs") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.064839 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.086216 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4" (OuterVolumeSpecName: "kube-api-access-9bhx4") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "kube-api-access-9bhx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.098243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data" (OuterVolumeSpecName: "config-data") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.101152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.130667 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.132956 4949 generic.go:334] "Generic (PLEG): container finished" podID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" exitCode=0 Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133087 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133777 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133855 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"b5f7d7790b27ef22d09958a0de70361e54da69dea5bf5665160ed80a276f0768"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.134090 4949 scope.go:117] "RemoveContainer" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.141552 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51e2ed93-379c-457d-992a-57160c6be51a","Type":"ContainerStarted","Data":"73c3e5f12252f0ae1df4019f7f31f3c6e8336b71ed42105acb202980708f0f29"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.165132 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.165106969 podStartE2EDuration="2.165106969s" podCreationTimestamp="2026-01-20 15:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:01.158767817 +0000 UTC m=+1196.968598675" watchObservedRunningTime="2026-01-20 15:10:01.165106969 +0000 UTC m=+1196.974937827" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.167563 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc 
kubenswrapper[4949]: I0120 15:10:01.177651 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.177670 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.177685 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.178381 4949 scope.go:117] "RemoveContainer" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.211608 4949 scope.go:117] "RemoveContainer" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.211755 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.213156 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": container with ID starting with 62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea not found: ID does not exist" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.213186 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} 
err="failed to get container status \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": rpc error: code = NotFound desc = could not find container \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": container with ID starting with 62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea not found: ID does not exist" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.213206 4949 scope.go:117] "RemoveContainer" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.214073 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": container with ID starting with 3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80 not found: ID does not exist" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.214093 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} err="failed to get container status \"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": rpc error: code = NotFound desc = could not find container \"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": container with ID starting with 3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80 not found: ID does not exist" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.221707 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.227468 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.228170 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228294 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.228379 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228452 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228762 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228863 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.241430 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.252289 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.255139 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.256053 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280073 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280137 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280257 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381814 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 
15:10:01.381996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.382664 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.386778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.386913 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.398211 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.399856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod 
\"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.578743 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.099277 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.152265 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"7b03474a083f4d7dd7543d9c695ae60e2cd7f68dd98a3023a29b3b030a6b212a"} Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.799856 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" path="/var/lib/kubelet/pods/42bf2757-50b8-4780-91b2-f0e4a62ea50c/volumes" Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.164066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"36f442cdac5cb56104c4a45e5855152cc04b5dbb09628c44a5dcb3532627725e"} Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.165170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"b26d835b8f44d8929467bab7247ccdee72a079bf467b424921e1b09387dd45bb"} Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.189906 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.189887725 podStartE2EDuration="2.189887725s" podCreationTimestamp="2026-01-20 15:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 15:10:03.183459109 +0000 UTC m=+1198.993290017" watchObservedRunningTime="2026-01-20 15:10:03.189887725 +0000 UTC m=+1198.999718583"
Jan 20 15:10:04 crc kubenswrapper[4949]: I0120 15:10:04.471225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 20 15:10:06 crc kubenswrapper[4949]: I0120 15:10:06.579068 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 20 15:10:06 crc kubenswrapper[4949]: I0120 15:10:06.579693 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 20 15:10:08 crc kubenswrapper[4949]: I0120 15:10:08.491296 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 15:10:08 crc kubenswrapper[4949]: I0120 15:10:08.491640 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.472686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.504110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.505979 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0174a61d-76ab-4198-91f1-d97291db561b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.505904 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0174a61d-76ab-4198-91f1-d97291db561b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 20 15:10:10 crc kubenswrapper[4949]: I0120 15:10:10.261330 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 20 15:10:11 crc kubenswrapper[4949]: I0120 15:10:11.579321 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 20 15:10:11 crc kubenswrapper[4949]: I0120 15:10:11.579720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 20 15:10:12 crc kubenswrapper[4949]: I0120 15:10:12.626883 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4185f7d0-b70a-4d49-82b9-e249bd1b2c48" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 20 15:10:12 crc kubenswrapper[4949]: I0120 15:10:12.626883 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4185f7d0-b70a-4d49-82b9-e249bd1b2c48" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.394786 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.506994 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.507789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.513056 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.518599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 20 15:10:19 crc kubenswrapper[4949]: I0120 15:10:19.318962 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 20 15:10:19 crc kubenswrapper[4949]: I0120 15:10:19.329048 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.588161 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.593986 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.603903 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 20 15:10:22 crc kubenswrapper[4949]: I0120 15:10:22.352811 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 20 15:10:30 crc kubenswrapper[4949]: I0120 15:10:30.992773 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 20 15:10:31 crc kubenswrapper[4949]: I0120 15:10:31.787119 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 20 15:10:35 crc kubenswrapper[4949]: I0120 15:10:35.111877 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" containerID="cri-o://4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" gracePeriod=604796
Jan 20 15:10:36 crc kubenswrapper[4949]: I0120 15:10:36.054144 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" containerID="cri-o://71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" gracePeriod=604796
Jan 20 15:10:37 crc kubenswrapper[4949]: I0120 15:10:37.159997 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused"
Jan 20 15:10:37 crc kubenswrapper[4949]: I0120 15:10:37.444189 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused"
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.683510 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850345 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850543 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850605 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850682 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850710 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850772 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850820 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850857 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850907 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.851775 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.852256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.853654 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.856468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info" (OuterVolumeSpecName: "pod-info") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.857980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct" (OuterVolumeSpecName: "kube-api-access-pr5ct") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "kube-api-access-pr5ct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.861500 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.861606 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.878747 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.913811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data" (OuterVolumeSpecName: "config-data") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.922703 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf" (OuterVolumeSpecName: "server-conf") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.952754 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953024 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953113 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953200 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953292 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953372 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953460 4949 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953592 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953673 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953745 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.975275 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.992633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.055099 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.055137 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122669 4949 generic.go:334] "Generic (PLEG): container finished" podID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" exitCode=0
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"}
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57"}
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122765 4949 scope.go:117] "RemoveContainer" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122914 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.163884 4949 scope.go:117] "RemoveContainer" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.189501 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.199951 4949 scope.go:117] "RemoveContainer" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"
Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.200444 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": container with ID starting with 4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d not found: ID does not exist" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200481 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"} err="failed to get container status \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": rpc error: code = NotFound desc = could not find container \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": container with ID starting with 4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d not found: ID does not exist"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200507 4949 scope.go:117] "RemoveContainer" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"
Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.200832 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": container with ID starting with ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131 not found: ID does not exist" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200910 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"} err="failed to get container status \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": rpc error: code = NotFound desc = could not find container \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": container with ID starting with ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131 not found: ID does not exist"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.224937 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240197 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.240725 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240745 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq"
Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.240757 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="setup-container"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240762 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="setup-container"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240921 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.242989 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245113 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245620 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245919 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.246276 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpjq5"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.246469 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.251825 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.259870 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359767 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359988 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360314 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.461764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462039 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462056 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462113 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462151 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462217 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462249 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463068 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463766 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463828 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.466438 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.466966 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.469171 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.474748 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.482299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.487376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.508938 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.585345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.594322 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775113 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775186 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775272 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\"
(UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775421 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775466 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775498 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775554 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 
15:10:42.775578 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.778329 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.778786 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.781582 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.784510 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.784537 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8" (OuterVolumeSpecName: "kube-api-access-vkpc8") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "kube-api-access-vkpc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.785431 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.788816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.790281 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info" (OuterVolumeSpecName: "pod-info") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.798636 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data" (OuterVolumeSpecName: "config-data") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.819446 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" path="/var/lib/kubelet/pods/cf4b5f65-52fe-4e8b-9d12-817e94e9b629/volumes" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.837696 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf" (OuterVolumeSpecName: "server-conf") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877748 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877811 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877826 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877851 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877891 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877907 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877921 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877935 4949 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877973 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877985 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.880683 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.904901 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.979882 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.979918 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.122672 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139000 4949 generic.go:334] "Generic (PLEG): container finished" podID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" exitCode=0 Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139042 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139068 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a"} Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139086 4949 scope.go:117] "RemoveContainer" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" 
Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139189 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.180933 4949 scope.go:117] "RemoveContainer" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.191093 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.199137 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.223876 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.224348 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="setup-container" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224363 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="setup-container" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.224376 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224384 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224621 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.225281 4949 scope.go:117] "RemoveContainer" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" Jan 20 
15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.225694 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.228149 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": container with ID starting with 71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281 not found: ID does not exist" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.228198 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} err="failed to get container status \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": rpc error: code = NotFound desc = could not find container \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": container with ID starting with 71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281 not found: ID does not exist" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.228226 4949 scope.go:117] "RemoveContainer" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.231778 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": container with ID starting with 7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c not found: ID does not exist" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.231822 4949 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"} err="failed to get container status \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": rpc error: code = NotFound desc = could not find container \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": container with ID starting with 7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c not found: ID does not exist" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232315 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232500 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232949 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233187 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233440 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233684 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233926 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2fdrl" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.243261 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.386948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387604 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387655 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387757 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387854 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489113 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489273 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489297 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489325 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489549 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490166 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.491406 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.491643 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494464 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.508578 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.513726 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.539309 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.624553 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.054922 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.156344 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"8f209c95db3b06f8092c3d65f9cf16cb8bbd63aec85074513798ef6de863457b"} Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.158878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"fb7c1f66a1db9ea2f8ac89f67a24e86d02db7452cc7936bccad31e2d3c3fa80c"} Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.812239 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" path="/var/lib/kubelet/pods/f3c1f546-0796-457f-8b06-a5ffd11e1b36/volumes" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.177599 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478"} Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.179171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c"} Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.596147 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.597573 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.599743 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.612221 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749402 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749460 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " 
pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749604 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851348 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: 
\"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851482 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851537 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851553 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.852250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.852295 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 
15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.853705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.854906 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.857925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.878470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.975369 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:47 crc kubenswrapper[4949]: I0120 15:10:47.421561 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:47 crc kubenswrapper[4949]: W0120 15:10:47.425103 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4944a90_5076_4b63_8f86_749ad6555dbe.slice/crio-d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a WatchSource:0}: Error finding container d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a: Status 404 returned error can't find the container with id d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200389 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" exitCode=0 Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009"} Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerStarted","Data":"d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a"} Jan 20 15:10:49 crc kubenswrapper[4949]: I0120 15:10:49.213295 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerStarted","Data":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} Jan 20 15:10:49 crc 
kubenswrapper[4949]: I0120 15:10:49.213601 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:49 crc kubenswrapper[4949]: I0120 15:10:49.245184 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" podStartSLOduration=3.245163247 podStartE2EDuration="3.245163247s" podCreationTimestamp="2026-01-20 15:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:49.233575816 +0000 UTC m=+1245.043406684" watchObservedRunningTime="2026-01-20 15:10:49.245163247 +0000 UTC m=+1245.054994115" Jan 20 15:10:56 crc kubenswrapper[4949]: I0120 15:10:56.976885 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.090004 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.090223 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" containerID="cri-o://3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" gracePeriod=10 Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.294756 4949 generic.go:334] "Generic (PLEG): container finished" podID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerID="3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" exitCode=0 Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.295008 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107"} 
Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.317493 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.318927 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.340700 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381856 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381932 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381957 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483401 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483473 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483551 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" 
(UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483592 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484602 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " 
pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.485041 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.485113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.504762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.570121 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.636288 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686358 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.690264 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8" (OuterVolumeSpecName: "kube-api-access-wl5b8") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "kube-api-access-wl5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.748181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config" (OuterVolumeSpecName: "config") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.760263 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.760940 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.767311 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789683 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789724 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789736 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789749 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789757 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:58 crc kubenswrapper[4949]: W0120 15:10:58.090406 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5fd960d_ae25_4d53_bf2e_c952c18f5c4e.slice/crio-28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92 WatchSource:0}: Error finding container 28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92: Status 404 returned error can't find the container with id 28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92 Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.090626 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"55f29ec4bac4ac4376fe3452c37cd668b9f8ffe67866fcc276100056a1141b3d"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305867 4949 scope.go:117] "RemoveContainer" containerID="3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305978 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.309782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.309830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.452280 4949 scope.go:117] "RemoveContainer" containerID="70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.483949 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.493356 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.801201 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" path="/var/lib/kubelet/pods/f0e49de8-75d6-4106-894c-b8b22ef6f279/volumes" Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.323743 4949 generic.go:334] "Generic (PLEG): container finished" podID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" exitCode=0 Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.323816 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.324065 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.324075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.370813 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" podStartSLOduration=2.370793812 podStartE2EDuration="2.370793812s" podCreationTimestamp="2026-01-20 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:59.338668466 +0000 UTC m=+1255.148499344" watchObservedRunningTime="2026-01-20 15:10:59.370793812 +0000 UTC m=+1255.180624670" Jan 20 15:11:02 crc kubenswrapper[4949]: I0120 15:11:02.366407 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" 
podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.638549 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.723122 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.723444 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" containerID="cri-o://8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" gracePeriod=10 Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.144468 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.194957 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195092 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195120 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195237 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195291 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.224049 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx" (OuterVolumeSpecName: "kube-api-access-kb5tx") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "kube-api-access-kb5tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.252005 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.259618 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.262086 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.266053 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.292914 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config" (OuterVolumeSpecName: "config") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298081 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298118 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298129 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298141 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298149 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298158 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417577 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" exitCode=0 Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a"} Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417682 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417720 4949 scope.go:117] "RemoveContainer" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.453780 4949 scope.go:117] "RemoveContainer" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.461801 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.473158 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481307 4949 scope.go:117] "RemoveContainer" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: E0120 15:11:08.481795 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": container with ID starting with 8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86 not found: ID does not exist" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481828 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} err="failed to get container status \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": rpc error: code = NotFound desc = could not find container \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": container with ID starting with 8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86 not found: ID does not exist" Jan 20 
15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481852 4949 scope.go:117] "RemoveContainer" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: E0120 15:11:08.482066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": container with ID starting with ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009 not found: ID does not exist" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.482089 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009"} err="failed to get container status \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": rpc error: code = NotFound desc = could not find container \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": container with ID starting with ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009 not found: ID does not exist" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.799266 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" path="/var/lib/kubelet/pods/c4944a90-5076-4b63-8f86-749ad6555dbe/volumes" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.392222 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393354 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393375 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393406 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393417 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393437 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393449 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393468 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393478 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393910 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393944 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.394879 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399677 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399843 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399885 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.400656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.409640 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.496938 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497254 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598326 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598421 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598496 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.606877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.607197 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.614142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.621324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.718766 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.283214 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.292907 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.477308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerStarted","Data":"c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd"} Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.514663 4949 generic.go:334] "Generic (PLEG): container finished" podID="81813586-eebe-4c95-ad8b-433b8c501337" containerID="ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478" exitCode=0 Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.514757 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerDied","Data":"ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478"} Jan 20 15:11:18 crc 
kubenswrapper[4949]: I0120 15:11:18.518737 4949 generic.go:334] "Generic (PLEG): container finished" podID="18d74874-b8f5-4706-abfe-c8d1cb7bb21b" containerID="e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c" exitCode=0 Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.518825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerDied","Data":"e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.578296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"73645e53e2d63ae130e7c318b3aba4707b845ef89697cee9c55e6d29581b4bab"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.580590 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"911b238238730f1c23ad133a7d88638fd9cc43d5cae357f9bfc96af368b0f4d5"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.580834 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.584213 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerStarted","Data":"f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.603414 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.603395906 podStartE2EDuration="41.603395906s" podCreationTimestamp="2026-01-20 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:24.603007143 +0000 UTC m=+1280.412838011" watchObservedRunningTime="2026-01-20 15:11:24.603395906 +0000 UTC m=+1280.413226764" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.627645 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" podStartSLOduration=2.529063213 podStartE2EDuration="11.627627411s" podCreationTimestamp="2026-01-20 15:11:13 +0000 UTC" firstStartedPulling="2026-01-20 15:11:14.292405298 +0000 UTC m=+1270.102236196" lastFinishedPulling="2026-01-20 15:11:23.390969536 +0000 UTC m=+1279.200800394" observedRunningTime="2026-01-20 15:11:24.62295407 +0000 UTC m=+1280.432784928" watchObservedRunningTime="2026-01-20 15:11:24.627627411 +0000 UTC m=+1280.437458269" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.662098 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.662081811 podStartE2EDuration="42.662081811s" podCreationTimestamp="2026-01-20 15:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:24.659490718 +0000 UTC m=+1280.469321596" watchObservedRunningTime="2026-01-20 15:11:24.662081811 +0000 UTC m=+1280.471912669" Jan 20 15:11:27 crc kubenswrapper[4949]: I0120 15:11:27.152383 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:11:27 crc kubenswrapper[4949]: I0120 15:11:27.153241 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:11:33 crc kubenswrapper[4949]: I0120 15:11:33.625928 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:11:33 crc kubenswrapper[4949]: I0120 15:11:33.629977 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:11:36 crc kubenswrapper[4949]: I0120 15:11:36.721667 4949 generic.go:334] "Generic (PLEG): container finished" podID="96f6253d-b990-4892-bd1f-9534caf70130" containerID="f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3" exitCode=0 Jan 20 15:11:36 crc kubenswrapper[4949]: I0120 15:11:36.721814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerDied","Data":"f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"} Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.123177 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259131 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") "
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") "
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259574 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") "
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") "
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.264196 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.266803 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf" (OuterVolumeSpecName: "kube-api-access-kzfcf") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "kube-api-access-kzfcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.286076 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.301818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory" (OuterVolumeSpecName: "inventory") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363301 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363357 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") on node \"crc\" DevicePath \"\""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363379 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363400 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767441 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerDied","Data":"c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd"}
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767498 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767628 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.854916 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"]
Jan 20 15:11:38 crc kubenswrapper[4949]: E0120 15:11:38.855510 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.855568 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.855875 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.856832 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.859823 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.859893 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.860296 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.860473 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.867931 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"]
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.980936 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981070 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981227 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083303 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083551 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.090608 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.091068 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.094271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.104639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.180204 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:11:39 crc kubenswrapper[4949]: W0120 15:11:39.725731 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b69ef09_6dac_4ebb_b970_9c94553bea5a.slice/crio-47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b WatchSource:0}: Error finding container 47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b: Status 404 returned error can't find the container with id 47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.726715 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"]
Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.776140 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerStarted","Data":"47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b"}
Jan 20 15:11:40 crc kubenswrapper[4949]: I0120 15:11:40.785705 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerStarted","Data":"0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"}
Jan 20 15:11:40 crc kubenswrapper[4949]: I0120 15:11:40.810818 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" podStartSLOduration=2.267182357 podStartE2EDuration="2.810793696s" podCreationTimestamp="2026-01-20 15:11:38 +0000 UTC" firstStartedPulling="2026-01-20 15:11:39.731277944 +0000 UTC m=+1295.541108802" lastFinishedPulling="2026-01-20 15:11:40.274889233 +0000 UTC m=+1296.084720141" observedRunningTime="2026-01-20 15:11:40.800751056 +0000 UTC m=+1296.610581934" watchObservedRunningTime="2026-01-20 15:11:40.810793696 +0000 UTC m=+1296.620624564"
Jan 20 15:11:42 crc kubenswrapper[4949]: I0120 15:11:42.588865 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 20 15:11:57 crc kubenswrapper[4949]: I0120 15:11:57.152406 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:11:57 crc kubenswrapper[4949]: I0120 15:11:57.152955 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.035759 4949 scope.go:117] "RemoveContainer" containerID="75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760"
Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.064059 4949 scope.go:117] "RemoveContainer" containerID="68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac"
Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.153376 4949 scope.go:117] "RemoveContainer" containerID="e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746"
Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.151945 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.152598 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.152652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd"
Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.153328 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.153391 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" gracePeriod=600
Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.287608 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" exitCode=0
Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.287720 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"}
Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.288308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"}
Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.288338 4949 scope.go:117] "RemoveContainer" containerID="bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"
Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.230325 4949 scope.go:117] "RemoveContainer" containerID="1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa"
Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.285403 4949 scope.go:117] "RemoveContainer" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec"
Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.335310 4949 scope.go:117] "RemoveContainer" containerID="1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c"
Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.368031 4949 scope.go:117] "RemoveContainer" containerID="a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113"
Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.412246 4949 scope.go:117] "RemoveContainer" containerID="7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8"
Jan 20 15:14:27 crc kubenswrapper[4949]: I0120 15:14:27.151759 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:14:27 crc kubenswrapper[4949]: I0120 15:14:27.152304 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:14:53 crc kubenswrapper[4949]: I0120 15:14:53.168090 4949 generic.go:334] "Generic (PLEG): container finished" podID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerID="0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624" exitCode=0
Jan 20 15:14:53 crc kubenswrapper[4949]: I0120 15:14:53.168208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerDied","Data":"0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"}
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.626781 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781309 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") "
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781413 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") "
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781495 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") "
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781609 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") "
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.787550 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.788304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q" (OuterVolumeSpecName: "kube-api-access-sqb8q") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "kube-api-access-sqb8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.815275 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.815408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory" (OuterVolumeSpecName: "inventory") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883691 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883723 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883732 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883742 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") on node \"crc\" DevicePath \"\""
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerDied","Data":"47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b"}
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191839 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191869 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.314948 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"]
Jan 20 15:14:55 crc kubenswrapper[4949]: E0120 15:14:55.316408 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.316461 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.317799 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.318658 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.321362 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.321969 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.322419 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.326065 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.328674 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"]
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.501153 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.501969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.502037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.604379 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.604443 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.604580 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.610634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.610655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.623525 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.644660 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"
Jan 20 15:14:56 crc kubenswrapper[4949]: I0120 15:14:56.198599 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"]
Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.154903 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.155419 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.211990 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerStarted","Data":"79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187"}
Jan 20 15:14:58 crc kubenswrapper[4949]: I0120 15:14:58.223201 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerStarted","Data":"e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"}
Jan 20 15:14:58 crc kubenswrapper[4949]: I0120 15:14:58.242398 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" podStartSLOduration=1.7207840810000001 podStartE2EDuration="3.242357395s" podCreationTimestamp="2026-01-20 15:14:55 +0000 UTC" firstStartedPulling="2026-01-20 15:14:56.20219408 +0000 UTC m=+1492.012024938" lastFinishedPulling="2026-01-20 15:14:57.723767394 +0000 UTC m=+1493.533598252" observedRunningTime="2026-01-20 15:14:58.241832649 +0000 UTC m=+1494.051663537" watchObservedRunningTime="2026-01-20 15:14:58.242357395 +0000 UTC m=+1494.052188253"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.151156 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.153955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.157729 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.158153 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.162530 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297869 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.399801 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.399890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.400133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.400817 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.409324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.418903 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"
Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.478441 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.970140 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"] Jan 20 15:15:00 crc kubenswrapper[4949]: W0120 15:15:00.974983 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod574a1f73_b7b1_4ff1_9621_3c13ad507d66.slice/crio-e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387 WatchSource:0}: Error finding container e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387: Status 404 returned error can't find the container with id e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387 Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.270975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerStarted","Data":"1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"} Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.271031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerStarted","Data":"e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387"} Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.295615 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" podStartSLOduration=1.295596817 podStartE2EDuration="1.295596817s" podCreationTimestamp="2026-01-20 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 
15:15:01.288960671 +0000 UTC m=+1497.098791529" watchObservedRunningTime="2026-01-20 15:15:01.295596817 +0000 UTC m=+1497.105427675" Jan 20 15:15:02 crc kubenswrapper[4949]: I0120 15:15:02.288220 4949 generic.go:334] "Generic (PLEG): container finished" podID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerID="1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b" exitCode=0 Jan 20 15:15:02 crc kubenswrapper[4949]: I0120 15:15:02.288332 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerDied","Data":"1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"} Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.645835 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.765899 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766111 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766153 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: 
\"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766827 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume" (OuterVolumeSpecName: "config-volume") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.772605 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn" (OuterVolumeSpecName: "kube-api-access-sfkwn") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "kube-api-access-sfkwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.773490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867782 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867815 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867825 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.308928 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerDied","Data":"e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387"} Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.308970 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387" Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.309034 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.543751 4949 scope.go:117] "RemoveContainer" containerID="044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.569694 4949 scope.go:117] "RemoveContainer" containerID="d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.586941 4949 scope.go:117] "RemoveContainer" containerID="92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.626019 4949 scope.go:117] "RemoveContainer" containerID="b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.430021 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:26 crc kubenswrapper[4949]: E0120 15:15:26.431157 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.431175 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.431374 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.432914 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.452266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525200 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627378 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627547 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.651478 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.755271 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.152790 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153185 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153965 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.154049 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" gracePeriod=600 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.283918 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:27 crc kubenswrapper[4949]: E0120 15:15:27.289066 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.536374 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" exitCode=0 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.536564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"} Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.536995 4949 scope.go:117] "RemoveContainer" containerID="1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.537987 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:27 crc kubenswrapper[4949]: E0120 15:15:27.538269 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 
15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.539887 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" exitCode=0 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.539929 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c"} Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.540050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"ae40b37149370d760ec04644c7fafc2f2f8f38ac0934b5aee3c6cf9cc4197563"} Jan 20 15:15:29 crc kubenswrapper[4949]: I0120 15:15:29.565169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} Jan 20 15:15:30 crc kubenswrapper[4949]: I0120 15:15:30.579404 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" exitCode=0 Jan 20 15:15:30 crc kubenswrapper[4949]: I0120 15:15:30.579456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} Jan 20 15:15:33 crc kubenswrapper[4949]: I0120 15:15:33.606336 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" 
event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} Jan 20 15:15:33 crc kubenswrapper[4949]: I0120 15:15:33.628195 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zgqwk" podStartSLOduration=2.334021055 podStartE2EDuration="7.628165597s" podCreationTimestamp="2026-01-20 15:15:26 +0000 UTC" firstStartedPulling="2026-01-20 15:15:27.542442308 +0000 UTC m=+1523.352273166" lastFinishedPulling="2026-01-20 15:15:32.83658684 +0000 UTC m=+1528.646417708" observedRunningTime="2026-01-20 15:15:33.624681459 +0000 UTC m=+1529.434512317" watchObservedRunningTime="2026-01-20 15:15:33.628165597 +0000 UTC m=+1529.437996505" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.755621 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.756434 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.822810 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:38 crc kubenswrapper[4949]: I0120 15:15:38.789248 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:38 crc kubenswrapper[4949]: E0120 15:15:38.789679 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:46 crc kubenswrapper[4949]: I0120 15:15:46.811244 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:46 crc kubenswrapper[4949]: I0120 15:15:46.875149 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:47 crc kubenswrapper[4949]: I0120 15:15:47.740723 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zgqwk" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" containerID="cri-o://1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" gracePeriod=2 Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.205427 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328056 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328440 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328545 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" 
(UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.331017 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities" (OuterVolumeSpecName: "utilities") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.350342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89" (OuterVolumeSpecName: "kube-api-access-q9k89") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "kube-api-access-q9k89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.399851 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432125 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432179 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432201 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752110 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" exitCode=0 Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752183 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752234 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"ae40b37149370d760ec04644c7fafc2f2f8f38ac0934b5aee3c6cf9cc4197563"} Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752268 4949 scope.go:117] "RemoveContainer" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.794126 4949 scope.go:117] "RemoveContainer" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.801184 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.801218 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.814329 4949 scope.go:117] "RemoveContainer" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.862239 4949 scope.go:117] "RemoveContainer" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 15:15:48.862929 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": container with ID starting with 1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98 not found: ID does not exist" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863117 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} err="failed to get container status \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": rpc error: code = NotFound desc = could not find container \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": container with ID starting with 1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98 not found: ID does not exist" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863239 4949 scope.go:117] "RemoveContainer" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 15:15:48.863820 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": container with ID starting with e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23 not found: ID does not exist" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863871 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} err="failed to get container status \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": rpc error: code = NotFound desc = could not find container \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": container with ID starting with e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23 not found: ID does not exist" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863910 4949 scope.go:117] "RemoveContainer" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 
15:15:48.864252 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": container with ID starting with b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c not found: ID does not exist" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.864336 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c"} err="failed to get container status \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": rpc error: code = NotFound desc = could not find container \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": container with ID starting with b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c not found: ID does not exist" Jan 20 15:15:49 crc kubenswrapper[4949]: I0120 15:15:49.789544 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:49 crc kubenswrapper[4949]: E0120 15:15:49.790211 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:50 crc kubenswrapper[4949]: I0120 15:15:50.804264 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" path="/var/lib/kubelet/pods/8308fb38-8369-4477-8b02-8ac8f53247ac/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.064349 
4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.082970 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.104575 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.113937 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.123575 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.132730 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.801238 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" path="/var/lib/kubelet/pods/2cffaea4-923f-446d-9df7-7c35332af89d/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.802284 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" path="/var/lib/kubelet/pods/81d427b9-3122-480c-8b2a-3862cdd2b3e2/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.803009 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" path="/var/lib/kubelet/pods/fa3acdd4-7817-4358-8afb-90399e3fa23f/volumes" Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.033343 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.040027 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.046458 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.053810 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.064320 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.071167 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.794720 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:04 crc kubenswrapper[4949]: E0120 15:16:04.795953 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.799016 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" path="/var/lib/kubelet/pods/5f223041-d962-43d8-81ad-0480ed09ff57/volumes" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.799580 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" path="/var/lib/kubelet/pods/625a0372-8b33-45fa-ad97-ad8e362be0fb/volumes" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.800240 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" path="/var/lib/kubelet/pods/e2993cec-87be-40ef-8f45-51ad7072f115/volumes" Jan 20 15:16:11 crc kubenswrapper[4949]: I0120 15:16:11.985102 4949 generic.go:334] "Generic (PLEG): container finished" podID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerID="e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3" exitCode=0 Jan 20 15:16:11 crc kubenswrapper[4949]: I0120 15:16:11.985222 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerDied","Data":"e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"} Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.032964 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.043668 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.800031 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" path="/var/lib/kubelet/pods/cafb93d7-a006-4cd2-99bd-e21022a5078f/volumes" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.444599 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.502253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.502371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.503202 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.508819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g" (OuterVolumeSpecName: "kube-api-access-hmx4g") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "kube-api-access-hmx4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.541628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.557711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory" (OuterVolumeSpecName: "inventory") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605804 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605853 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605872 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" 
event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerDied","Data":"79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187"} Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011172 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011244 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133033 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133578 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-content" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133607 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-content" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133658 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133670 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133689 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133703 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" 
containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133745 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-utilities" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133755 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-utilities" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.134048 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.134085 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.134966 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.136794 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.137763 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.140724 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.148431 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.149906 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.214416 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.214528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 
15:16:14.214788 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316550 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316647 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.321287 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.322488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.338840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.455000 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.024707 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.030863 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.692305 4949 scope.go:117] "RemoveContainer" containerID="29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.790774 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:15 crc kubenswrapper[4949]: E0120 15:16:15.791122 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.807746 4949 scope.go:117] "RemoveContainer" containerID="55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.838606 4949 scope.go:117] "RemoveContainer" containerID="094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.895809 4949 scope.go:117] "RemoveContainer" containerID="60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.919055 4949 scope.go:117] "RemoveContainer" 
containerID="dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.941193 4949 scope.go:117] "RemoveContainer" containerID="164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.963412 4949 scope.go:117] "RemoveContainer" containerID="182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364" Jan 20 15:16:16 crc kubenswrapper[4949]: I0120 15:16:16.038618 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerStarted","Data":"97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"} Jan 20 15:16:16 crc kubenswrapper[4949]: I0120 15:16:16.038670 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerStarted","Data":"ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e"} Jan 20 15:16:21 crc kubenswrapper[4949]: I0120 15:16:21.093453 4949 generic.go:334] "Generic (PLEG): container finished" podID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerID="97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684" exitCode=0 Jan 20 15:16:21 crc kubenswrapper[4949]: I0120 15:16:21.093564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerDied","Data":"97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"} Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.521825 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563764 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563844 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.569530 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt" (OuterVolumeSpecName: "kube-api-access-7s4lt") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "kube-api-access-7s4lt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.598837 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.618084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory" (OuterVolumeSpecName: "inventory") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665681 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665724 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665736 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" 
event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerDied","Data":"ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e"} Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111646 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111670 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.198564 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:23 crc kubenswrapper[4949]: E0120 15:16:23.198943 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.198963 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.199151 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.199723 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.201665 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202394 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202505 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202958 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.240920 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.275623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.275713 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 
15:16:23.275761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377590 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.381636 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.382317 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.394510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.520245 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:24 crc kubenswrapper[4949]: I0120 15:16:24.070000 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:24 crc kubenswrapper[4949]: I0120 15:16:24.121837 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerStarted","Data":"cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe"} Jan 20 15:16:25 crc kubenswrapper[4949]: I0120 15:16:25.132762 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerStarted","Data":"c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"} Jan 20 15:16:25 crc kubenswrapper[4949]: I0120 15:16:25.154268 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" podStartSLOduration=1.5372355519999998 podStartE2EDuration="2.154246194s" podCreationTimestamp="2026-01-20 15:16:23 +0000 UTC" firstStartedPulling="2026-01-20 15:16:24.082248032 +0000 UTC m=+1579.892078910" lastFinishedPulling="2026-01-20 15:16:24.699258694 +0000 UTC m=+1580.509089552" observedRunningTime="2026-01-20 15:16:25.148212364 +0000 UTC m=+1580.958043222" watchObservedRunningTime="2026-01-20 15:16:25.154246194 +0000 UTC m=+1580.964077052" Jan 20 15:16:26 crc kubenswrapper[4949]: I0120 15:16:26.789785 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:26 crc kubenswrapper[4949]: E0120 15:16:26.790647 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.051659 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.066840 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.076879 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.086099 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.092979 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.099956 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.106510 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.112864 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.119610 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.127208 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:16:32 crc 
kubenswrapper[4949]: I0120 15:16:32.133902 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.140462 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.806183 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" path="/var/lib/kubelet/pods/2114c9bc-9691-4d96-8541-28ec5473428a/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.807503 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" path="/var/lib/kubelet/pods/6cbefde7-e737-4f29-9093-afc47f438c4c/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.808771 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" path="/var/lib/kubelet/pods/900c89f3-a834-4a95-88cf-b6fda3fc9c58/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.809853 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" path="/var/lib/kubelet/pods/b57b3d7e-755f-43d2-aab3-f6d68a062a37/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.811918 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" path="/var/lib/kubelet/pods/c5b36c38-4cb3-43d1-ade8-a1e554264870/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.812927 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" path="/var/lib/kubelet/pods/de0efbc8-5060-4336-85af-23b901dd02fe/volumes" Jan 20 15:16:35 crc kubenswrapper[4949]: I0120 15:16:35.022991 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:16:35 crc 
kubenswrapper[4949]: I0120 15:16:35.029397 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:16:36 crc kubenswrapper[4949]: I0120 15:16:36.802452 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" path="/var/lib/kubelet/pods/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e/volumes" Jan 20 15:16:38 crc kubenswrapper[4949]: I0120 15:16:38.790770 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:38 crc kubenswrapper[4949]: E0120 15:16:38.791558 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.029843 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.044700 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.807700 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" path="/var/lib/kubelet/pods/a8e8050e-32dc-4014-9bc7-cd06d127eb38/volumes" Jan 20 15:16:53 crc kubenswrapper[4949]: I0120 15:16:53.789145 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:53 crc kubenswrapper[4949]: E0120 15:16:53.789944 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.349064 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.352369 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.369327 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506483 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506658 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: 
\"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608626 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.609327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.609481 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") 
" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.630585 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.691053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.225077 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.485796 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90" exitCode=0 Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.486215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"} Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.486266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerStarted","Data":"c5c43880117653d176e1498628fccde90a84869ad2ea1f0556fe05553c3d89b2"} Jan 20 15:17:04 crc kubenswrapper[4949]: I0120 15:17:04.799983 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:04 crc kubenswrapper[4949]: E0120 15:17:04.801126 4949 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:05 crc kubenswrapper[4949]: I0120 15:17:05.508509 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885" exitCode=0 Jan 20 15:17:05 crc kubenswrapper[4949]: I0120 15:17:05.508576 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.524155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerStarted","Data":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.526602 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b62cf27-c244-466f-bddd-129a1a3db687" containerID="c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3" exitCode=0 Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.526662 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerDied","Data":"c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.547370 4949 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwf8t" podStartSLOduration=1.747856418 podStartE2EDuration="4.547351463s" podCreationTimestamp="2026-01-20 15:17:02 +0000 UTC" firstStartedPulling="2026-01-20 15:17:03.487152302 +0000 UTC m=+1619.296983160" lastFinishedPulling="2026-01-20 15:17:06.286647337 +0000 UTC m=+1622.096478205" observedRunningTime="2026-01-20 15:17:06.544640226 +0000 UTC m=+1622.354471084" watchObservedRunningTime="2026-01-20 15:17:06.547351463 +0000 UTC m=+1622.357182321" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.412264 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.528985 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.529066 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.529145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.535142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5" (OuterVolumeSpecName: "kube-api-access-f9mj5") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "kube-api-access-f9mj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543669 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerDied","Data":"cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe"} Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543708 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543758 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.562186 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory" (OuterVolumeSpecName: "inventory") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.564480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634739 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634788 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634802 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.638337 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:08 crc kubenswrapper[4949]: E0120 15:17:08.638820 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.638840 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.639074 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.640712 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.650279 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736265 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838554 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" 
(UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.842369 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.843929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.856721 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.968714 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:09 crc kubenswrapper[4949]: I0120 15:17:09.344778 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:09 crc kubenswrapper[4949]: W0120 15:17:09.352833 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949e48ac_89ca_4f38_886e_fd951c7d7217.slice/crio-873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9 WatchSource:0}: Error finding container 873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9: Status 404 returned error can't find the container with id 873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9 Jan 20 15:17:09 crc kubenswrapper[4949]: I0120 15:17:09.551207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerStarted","Data":"873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9"} Jan 20 15:17:10 crc kubenswrapper[4949]: I0120 15:17:10.565667 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" 
event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerStarted","Data":"1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"} Jan 20 15:17:10 crc kubenswrapper[4949]: I0120 15:17:10.593418 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" podStartSLOduration=2.063735413 podStartE2EDuration="2.593399932s" podCreationTimestamp="2026-01-20 15:17:08 +0000 UTC" firstStartedPulling="2026-01-20 15:17:09.355954354 +0000 UTC m=+1625.165785212" lastFinishedPulling="2026-01-20 15:17:09.885618843 +0000 UTC m=+1625.695449731" observedRunningTime="2026-01-20 15:17:10.586268065 +0000 UTC m=+1626.396098923" watchObservedRunningTime="2026-01-20 15:17:10.593399932 +0000 UTC m=+1626.403230790" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.692190 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.692820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.737656 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:13 crc kubenswrapper[4949]: I0120 15:17:13.699174 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:13 crc kubenswrapper[4949]: I0120 15:17:13.771905 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:14 crc kubenswrapper[4949]: I0120 15:17:14.617898 4949 generic.go:334] "Generic (PLEG): container finished" podID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerID="1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6" exitCode=0 Jan 
20 15:17:14 crc kubenswrapper[4949]: I0120 15:17:14.617981 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerDied","Data":"1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"}
Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.045326 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lbd6l"]
Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.055501 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lbd6l"]
Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.624493 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwf8t" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server" containerID="cri-o://acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" gracePeriod=2
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.024125 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.113502 4949 scope.go:117] "RemoveContainer" containerID="db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.147824 4949 scope.go:117] "RemoveContainer" containerID="a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.176575 4949 scope.go:117] "RemoveContainer" containerID="1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182040 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182164 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.187949 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv" (OuterVolumeSpecName: "kube-api-access-fnpdv") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "kube-api-access-fnpdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.211542 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.212150 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory" (OuterVolumeSpecName: "inventory") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.248125 4949 scope.go:117] "RemoveContainer" containerID="3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285572 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285614 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285627 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.292655 4949 scope.go:117] "RemoveContainer" containerID="c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.313030 4949 scope.go:117] "RemoveContainer" containerID="e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.339062 4949 scope.go:117] "RemoveContainer" containerID="16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.369316 4949 scope.go:117] "RemoveContainer" containerID="d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.486177 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591275 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591329 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") "
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591905 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities" (OuterVolumeSpecName: "utilities") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.596568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5" (OuterVolumeSpecName: "kube-api-access-l7tq5") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "kube-api-access-l7tq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635320 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" exitCode=0
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635405 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635375 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"}
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"c5c43880117653d176e1498628fccde90a84869ad2ea1f0556fe05553c3d89b2"}
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635606 4949 scope.go:117] "RemoveContainer" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637722 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerDied","Data":"873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9"}
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637939 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637756 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.644191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.660423 4949 scope.go:117] "RemoveContainer" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694031 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694075 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694088 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.701829 4949 scope.go:117] "RemoveContainer" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.709920 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"]
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710269 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-utilities"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710285 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-utilities"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710307 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710314 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710330 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710337 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710350 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-content"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710356 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-content"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710582 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710605 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.711151 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716834 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.717407 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719130 4949 scope.go:117] "RemoveContainer" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.719870 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": container with ID starting with acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4 not found: ID does not exist" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719915 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"} err="failed to get container status \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": rpc error: code = NotFound desc = could not find container \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": container with ID starting with acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4 not found: ID does not exist"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719944 4949 scope.go:117] "RemoveContainer" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.720192 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": container with ID starting with 85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885 not found: ID does not exist" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720220 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"} err="failed to get container status \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": rpc error: code = NotFound desc = could not find container \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": container with ID starting with 85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885 not found: ID does not exist"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720242 4949 scope.go:117] "RemoveContainer" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"
Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.720434 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": container with ID starting with 69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90 not found: ID does not exist" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720459 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"} err="failed to get container status \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": rpc error: code = NotFound desc = could not find container \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": container with ID starting with 69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90 not found: ID does not exist"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.722582 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"]
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.808954 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" path="/var/lib/kubelet/pods/40994e0d-d911-4b6a-9ae9-96fbc4be8a36/volumes"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.899587 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.899994 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.900155 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.904934 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.905185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.924232 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.001759 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"]
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.008175 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"]
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.028920 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.537784 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"]
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.646258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerStarted","Data":"dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d"}
Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.789465 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:17:17 crc kubenswrapper[4949]: E0120 15:17:17.789794 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:17:18 crc kubenswrapper[4949]: I0120 15:17:18.802888 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd12601-236e-4205-a994-2202832cf5a2" path="/var/lib/kubelet/pods/5fd12601-236e-4205-a994-2202832cf5a2/volumes"
Jan 20 15:17:19 crc kubenswrapper[4949]: I0120 15:17:19.671410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerStarted","Data":"61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"}
Jan 20 15:17:19 crc kubenswrapper[4949]: I0120 15:17:19.699729 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" podStartSLOduration=2.117989741 podStartE2EDuration="3.699706492s" podCreationTimestamp="2026-01-20 15:17:16 +0000 UTC" firstStartedPulling="2026-01-20 15:17:17.536823992 +0000 UTC m=+1633.346654850" lastFinishedPulling="2026-01-20 15:17:19.118540713 +0000 UTC m=+1634.928371601" observedRunningTime="2026-01-20 15:17:19.692111503 +0000 UTC m=+1635.501942371" watchObservedRunningTime="2026-01-20 15:17:19.699706492 +0000 UTC m=+1635.509537380"
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.887059 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"]
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.892013 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.910350 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"]
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.030775 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.030959 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031452 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031586 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.053231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.208737 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:21 crc kubenswrapper[4949]: W0120 15:17:21.688178 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f10dbf2_f865_4fd3_b475_e34fd1a18aa4.slice/crio-a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707 WatchSource:0}: Error finding container a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707: Status 404 returned error can't find the container with id a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707
Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.694042 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"]
Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697409 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" exitCode=0
Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697464 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5"}
Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697948 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707"}
Jan 20 15:17:23 crc kubenswrapper[4949]: I0120 15:17:23.707710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"}
Jan 20 15:17:25 crc kubenswrapper[4949]: I0120 15:17:25.724448 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" exitCode=0
Jan 20 15:17:25 crc kubenswrapper[4949]: I0120 15:17:25.724493 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"}
Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.029917 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j9pm7"]
Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.040472 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j9pm7"]
Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.800365 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" path="/var/lib/kubelet/pods/1f96f008-7e3c-4512-bddd-51e42a0c7ce2/volumes"
Jan 20 15:17:27 crc kubenswrapper[4949]: I0120 15:17:27.031482 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"]
Jan 20 15:17:27 crc kubenswrapper[4949]: I0120 15:17:27.043989 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"]
Jan 20 15:17:28 crc kubenswrapper[4949]: I0120 15:17:28.759222 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"}
Jan 20 15:17:28 crc kubenswrapper[4949]: I0120 15:17:28.800921 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" path="/var/lib/kubelet/pods/26b5f79a-1adc-4ec3-a257-ce37600d2357/volumes"
Jan 20 15:17:29 crc kubenswrapper[4949]: I0120 15:17:29.796924 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sc76c" podStartSLOduration=4.359775904 podStartE2EDuration="9.796903828s" podCreationTimestamp="2026-01-20 15:17:20 +0000 UTC" firstStartedPulling="2026-01-20 15:17:22.700419023 +0000 UTC m=+1638.510249891" lastFinishedPulling="2026-01-20 15:17:28.137546957 +0000 UTC m=+1643.947377815" observedRunningTime="2026-01-20 15:17:29.790757095 +0000 UTC m=+1645.600587973" watchObservedRunningTime="2026-01-20 15:17:29.796903828 +0000 UTC m=+1645.606734706"
Jan 20 15:17:30 crc kubenswrapper[4949]: I0120 15:17:30.790969 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:17:30 crc kubenswrapper[4949]: E0120 15:17:30.791500 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:17:31 crc kubenswrapper[4949]: I0120 15:17:31.209194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:31 crc kubenswrapper[4949]: I0120 15:17:31.209868 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:32 crc kubenswrapper[4949]: I0120 15:17:32.274626 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sc76c" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" probeResult="failure" output=<
Jan 20 15:17:32 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s
Jan 20 15:17:32 crc kubenswrapper[4949]: >
Jan 20 15:17:33 crc kubenswrapper[4949]: I0120 15:17:33.046410 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lm4wz"]
Jan 20 15:17:33 crc kubenswrapper[4949]: I0120 15:17:33.056238 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lm4wz"]
Jan 20 15:17:34 crc kubenswrapper[4949]: I0120 15:17:34.801291 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f476712d-366a-4948-b282-66660a6d81c4" path="/var/lib/kubelet/pods/f476712d-366a-4948-b282-66660a6d81c4/volumes"
Jan 20 15:17:41 crc kubenswrapper[4949]: I0120 15:17:41.270339 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:41 crc kubenswrapper[4949]: I0120 15:17:41.324293 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sc76c"
Jan 20 15:17:41 crc kubenswrapper[4949]: I0120 
15:17:41.516690 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:42 crc kubenswrapper[4949]: I0120 15:17:42.788753 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:42 crc kubenswrapper[4949]: E0120 15:17:42.789354 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:42 crc kubenswrapper[4949]: I0120 15:17:42.886532 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sc76c" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" containerID="cri-o://2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" gracePeriod=2 Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.304564 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.340996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.341196 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.341244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.344297 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities" (OuterVolumeSpecName: "utilities") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.348485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j" (OuterVolumeSpecName: "kube-api-access-pww4j") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "kube-api-access-pww4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.443147 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.443188 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.478880 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.545032 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897790 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" exitCode=0 Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897850 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"} Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707"} Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897904 4949 scope.go:117] "RemoveContainer" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.898096 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.926098 4949 scope.go:117] "RemoveContainer" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.940934 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.948859 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.955437 4949 scope.go:117] "RemoveContainer" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.992621 4949 scope.go:117] "RemoveContainer" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 15:17:43.993240 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": container with ID starting with 2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6 not found: ID does not exist" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.993387 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"} err="failed to get container status \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": rpc error: code = NotFound desc = could not find container \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": container with ID starting with 2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6 not found: ID does not exist" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.993506 4949 scope.go:117] "RemoveContainer" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 15:17:43.994066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": container with ID starting with 08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c not found: ID does not exist" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994100 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"} err="failed to get container status \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": rpc error: code = NotFound desc = could not find container \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": container with ID starting with 08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c not found: ID does not exist" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994141 4949 scope.go:117] "RemoveContainer" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 
15:17:43.994396 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": container with ID starting with a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5 not found: ID does not exist" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994428 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5"} err="failed to get container status \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": rpc error: code = NotFound desc = could not find container \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": container with ID starting with a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5 not found: ID does not exist" Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.037608 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.050307 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.804406 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" path="/var/lib/kubelet/pods/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4/volumes" Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.806033 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" path="/var/lib/kubelet/pods/c18369cb-0b5b-40f7-bc73-af04fb510f31/volumes" Jan 20 15:17:53 crc kubenswrapper[4949]: I0120 15:17:53.789618 4949 scope.go:117] "RemoveContainer" 
containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:53 crc kubenswrapper[4949]: E0120 15:17:53.791116 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:07 crc kubenswrapper[4949]: I0120 15:18:07.789817 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:07 crc kubenswrapper[4949]: E0120 15:18:07.791385 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:15 crc kubenswrapper[4949]: I0120 15:18:15.191370 4949 generic.go:334] "Generic (PLEG): container finished" podID="744449f9-40c5-4c12-944e-f9ff875daf40" containerID="61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c" exitCode=0 Jan 20 15:18:15 crc kubenswrapper[4949]: I0120 15:18:15.191448 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerDied","Data":"61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"} Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.557256 4949 scope.go:117] "RemoveContainer" 
containerID="f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.642291 4949 scope.go:117] "RemoveContainer" containerID="8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.662538 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.687271 4949 scope.go:117] "RemoveContainer" containerID="32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.733486 4949 scope.go:117] "RemoveContainer" containerID="aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.772068 4949 scope.go:117] "RemoveContainer" containerID="57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799320 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.806830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj" (OuterVolumeSpecName: "kube-api-access-dqqpj") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "kube-api-access-dqqpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.825321 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory" (OuterVolumeSpecName: "inventory") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.826490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901348 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901374 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901386 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218476 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerDied","Data":"dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d"} Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218908 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218861 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.311755 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312115 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312132 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312148 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-utilities" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312154 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-utilities" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312178 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-content" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312184 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-content" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312194 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312199 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312349 4949 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312365 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.313053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321456 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321755 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321949 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.322462 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.337131 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408650 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408783 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408831 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510367 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510421 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.514138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.514850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.529675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.630315 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.200252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.226449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerStarted","Data":"5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed"} Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.789472 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:18 crc kubenswrapper[4949]: E0120 15:18:18.790436 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:19 crc kubenswrapper[4949]: I0120 15:18:19.249856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerStarted","Data":"34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"} Jan 20 15:18:19 crc kubenswrapper[4949]: I0120 15:18:19.268005 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-97647" podStartSLOduration=1.764240357 podStartE2EDuration="2.267986595s" podCreationTimestamp="2026-01-20 15:18:17 +0000 UTC" firstStartedPulling="2026-01-20 15:18:18.205463806 +0000 UTC m=+1694.015294664" lastFinishedPulling="2026-01-20 
15:18:18.709210044 +0000 UTC m=+1694.519040902" observedRunningTime="2026-01-20 15:18:19.262635948 +0000 UTC m=+1695.072466806" watchObservedRunningTime="2026-01-20 15:18:19.267986595 +0000 UTC m=+1695.077817453" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.069280 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.078819 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.087078 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.095216 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.107403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.114017 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.122055 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.128226 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.134466 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.142379 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.150545 4949 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.158809 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.800934 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" path="/var/lib/kubelet/pods/170f8463-ece8-42b9-944f-b4adcc22e897/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.801601 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" path="/var/lib/kubelet/pods/3187f0f3-7689-4faf-92cc-8d869ef8ecd9/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.802171 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" path="/var/lib/kubelet/pods/6572b1b9-85e4-4ede-879f-754c173433d1/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.802798 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" path="/var/lib/kubelet/pods/91c4f23f-5c92-4f03-a457-6fe5ddc27eec/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.803973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" path="/var/lib/kubelet/pods/956eb935-630a-49f6-8b3e-e5053edea66b/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.804583 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" path="/var/lib/kubelet/pods/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c/volumes" Jan 20 15:18:27 crc kubenswrapper[4949]: I0120 15:18:27.328789 4949 generic.go:334] "Generic (PLEG): container finished" podID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerID="34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1" 
exitCode=0 Jan 20 15:18:27 crc kubenswrapper[4949]: I0120 15:18:27.329050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerDied","Data":"34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"} Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.822882 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971322 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971384 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971534 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.977427 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k" (OuterVolumeSpecName: "kube-api-access-vr62k") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). 
InnerVolumeSpecName "kube-api-access-vr62k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.995690 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.002684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073543 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073585 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073598 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345790 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerDied","Data":"5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed"} Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345820 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345840 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.416902 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:29 crc kubenswrapper[4949]: E0120 15:18:29.417342 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.417364 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.417595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.418359 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421275 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421397 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421539 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.422539 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.437228 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481453 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481803 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.582878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.582956 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.583030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.594787 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: 
\"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.596453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.602363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.733001 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:30 crc kubenswrapper[4949]: I0120 15:18:30.243234 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:30 crc kubenswrapper[4949]: I0120 15:18:30.354302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerStarted","Data":"f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe"} Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.407673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerStarted","Data":"56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"} Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.434609 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" podStartSLOduration=2.269525947 podStartE2EDuration="4.434587859s" podCreationTimestamp="2026-01-20 15:18:29 +0000 UTC" firstStartedPulling="2026-01-20 15:18:30.253357622 +0000 UTC m=+1706.063188480" lastFinishedPulling="2026-01-20 15:18:32.418419534 +0000 UTC m=+1708.228250392" observedRunningTime="2026-01-20 15:18:33.429297094 +0000 UTC m=+1709.239127952" watchObservedRunningTime="2026-01-20 15:18:33.434587859 +0000 UTC m=+1709.244418717" Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.789475 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:33 crc kubenswrapper[4949]: E0120 15:18:33.789744 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:41 crc kubenswrapper[4949]: I0120 15:18:41.469485 4949 generic.go:334] "Generic (PLEG): container finished" podID="ab73db4b-4663-4234-be64-866efa186f5a" containerID="56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82" exitCode=0 Jan 20 15:18:41 crc kubenswrapper[4949]: I0120 15:18:41.469579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerDied","Data":"56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"} Jan 20 15:18:42 crc kubenswrapper[4949]: I0120 15:18:42.918579 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031234 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031267 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: 
\"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.036792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6" (OuterVolumeSpecName: "kube-api-access-jdkm6") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "kube-api-access-jdkm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.056164 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.056706 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory" (OuterVolumeSpecName: "inventory") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133016 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133061 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133078 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.497784 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerDied","Data":"f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe"} Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.498019 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.497939 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.609128 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:43 crc kubenswrapper[4949]: E0120 15:18:43.609882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.609918 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.610342 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.611709 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.615868 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.615996 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.617212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.622617 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.623114 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.742857 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.742999 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.743109 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845577 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.850281 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.851823 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.863473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.940934 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:44 crc kubenswrapper[4949]: I0120 15:18:44.534476 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:44 crc kubenswrapper[4949]: W0120 15:18:44.554337 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd10292b4_ea8f_4236_8d89_3b97f21a04cb.slice/crio-dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d WatchSource:0}: Error finding container dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d: Status 404 returned error can't find the container with id dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d Jan 20 15:18:44 crc kubenswrapper[4949]: I0120 15:18:44.794894 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:44 crc kubenswrapper[4949]: E0120 15:18:44.795500 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.519028 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerStarted","Data":"e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"} Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.519678 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerStarted","Data":"dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d"} Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.539680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" podStartSLOduration=2.071206126 podStartE2EDuration="2.539654344s" podCreationTimestamp="2026-01-20 15:18:43 +0000 UTC" firstStartedPulling="2026-01-20 15:18:44.558158719 +0000 UTC m=+1720.367989577" lastFinishedPulling="2026-01-20 15:18:45.026606927 +0000 UTC m=+1720.836437795" observedRunningTime="2026-01-20 15:18:45.535881576 +0000 UTC m=+1721.345712474" watchObservedRunningTime="2026-01-20 15:18:45.539654344 +0000 UTC m=+1721.349485232" Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.081123 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.095857 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.802470 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d68b174-da83-41e7-804c-68a858beedf7" path="/var/lib/kubelet/pods/5d68b174-da83-41e7-804c-68a858beedf7/volumes" Jan 20 15:18:55 crc kubenswrapper[4949]: I0120 15:18:55.616575 4949 generic.go:334] "Generic (PLEG): container finished" podID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerID="e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2" exitCode=0 Jan 20 15:18:55 crc kubenswrapper[4949]: I0120 15:18:55.616641 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" 
event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerDied","Data":"e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"} Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.105746 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149640 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.157792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v" (OuterVolumeSpecName: "kube-api-access-cgb8v") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "kube-api-access-cgb8v". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.178094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.183256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory" (OuterVolumeSpecName: "inventory") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251084 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251118 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251131 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") on node \"crc\" DevicePath \"\""
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerDied","Data":"dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d"}
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645907 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d"
Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645918 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"
Jan 20 15:18:58 crc kubenswrapper[4949]: I0120 15:18:58.793031 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:18:58 crc kubenswrapper[4949]: E0120 15:18:58.798833 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.066728 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"]
Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.073969 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"]
Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.790810 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:19:12 crc kubenswrapper[4949]: E0120 15:19:12.791298 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.816373 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" path="/var/lib/kubelet/pods/5364ff4f-3ee5-4577-b82c-0c094bd55125/volumes"
Jan 20 15:19:13 crc kubenswrapper[4949]: I0120 15:19:13.069751 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"]
Jan 20 15:19:13 crc kubenswrapper[4949]: I0120 15:19:13.088478 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"]
Jan 20 15:19:14 crc kubenswrapper[4949]: I0120 15:19:14.802471 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883cbf80-263a-4fc7-b962-147019f05553" path="/var/lib/kubelet/pods/883cbf80-263a-4fc7-b962-147019f05553/volumes"
Jan 20 15:19:16 crc kubenswrapper[4949]: I0120 15:19:16.911934 4949 scope.go:117] "RemoveContainer" containerID="24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6"
Jan 20 15:19:16 crc kubenswrapper[4949]: I0120 15:19:16.952099 4949 scope.go:117] "RemoveContainer" containerID="3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.029579 4949 scope.go:117] "RemoveContainer" containerID="a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.060718 4949 scope.go:117] "RemoveContainer" containerID="135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.094100 4949 scope.go:117] "RemoveContainer" containerID="8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.169967 4949 scope.go:117] "RemoveContainer" containerID="ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.185267 4949 scope.go:117] "RemoveContainer" containerID="ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.219183 4949 scope.go:117] "RemoveContainer" containerID="938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd"
Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.235621 4949 scope.go:117] "RemoveContainer" containerID="1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f"
Jan 20 15:19:27 crc kubenswrapper[4949]: I0120 15:19:27.789504 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:19:27 crc kubenswrapper[4949]: E0120 15:19:27.790496 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:19:40 crc kubenswrapper[4949]: I0120 15:19:40.789108 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:19:40 crc kubenswrapper[4949]: E0120 15:19:40.789736 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:19:53 crc kubenswrapper[4949]: I0120 15:19:53.789460 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:19:53 crc kubenswrapper[4949]: E0120 15:19:53.790467 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:19:57 crc kubenswrapper[4949]: I0120 15:19:57.040333 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"]
Jan 20 15:19:57 crc kubenswrapper[4949]: I0120 15:19:57.051742 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"]
Jan 20 15:19:58 crc kubenswrapper[4949]: I0120 15:19:58.800995 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" path="/var/lib/kubelet/pods/462eb38e-1d62-43e2-92c4-1074a1c054b9/volumes"
Jan 20 15:20:05 crc kubenswrapper[4949]: I0120 15:20:05.788685 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:20:05 crc kubenswrapper[4949]: E0120 15:20:05.789320 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:20:17 crc kubenswrapper[4949]: I0120 15:20:17.379806 4949 scope.go:117] "RemoveContainer" containerID="4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640"
Jan 20 15:20:19 crc kubenswrapper[4949]: I0120 15:20:19.789690 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:20:19 crc kubenswrapper[4949]: E0120 15:20:19.790489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:20:30 crc kubenswrapper[4949]: I0120 15:20:30.789982 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"
Jan 20 15:20:31 crc kubenswrapper[4949]: I0120 15:20:31.535395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"}
Jan 20 15:22:57 crc kubenswrapper[4949]: I0120 15:22:57.152652 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:22:57 crc kubenswrapper[4949]: I0120 15:22:57.153237 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:23:27 crc kubenswrapper[4949]: I0120 15:23:27.152030 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:23:27 crc kubenswrapper[4949]: I0120 15:23:27.152458 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.817615 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.835418 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.846383 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.854503 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.861584 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.870844 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.878985 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.887258 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.895230 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.903694 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.911550 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.918005 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.923627 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.929226 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.935050 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.940245 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.945814 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.951850 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.957402 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"]
Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.962366 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"]
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.802455 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" path="/var/lib/kubelet/pods/3af1d203-d1de-4e8b-95cb-7977a46b0042/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.804053 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" path="/var/lib/kubelet/pods/3b69ef09-6dac-4ebb-b970-9c94553bea5a/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.805221 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" path="/var/lib/kubelet/pods/744449f9-40c5-4c12-944e-f9ff875daf40/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.806467 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" path="/var/lib/kubelet/pods/949e48ac-89ca-4f38-886e-fd951c7d7217/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.808438 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f6253d-b990-4892-bd1f-9534caf70130" path="/var/lib/kubelet/pods/96f6253d-b990-4892-bd1f-9534caf70130/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.809036 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" path="/var/lib/kubelet/pods/9b62cf27-c244-466f-bddd-129a1a3db687/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.809609 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab73db4b-4663-4234-be64-866efa186f5a" path="/var/lib/kubelet/pods/ab73db4b-4663-4234-be64-866efa186f5a/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.810607 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" path="/var/lib/kubelet/pods/ba3f2ff6-def1-41aa-8918-32399eb1a55b/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.811108 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" path="/var/lib/kubelet/pods/d10292b4-ea8f-4236-8d89-3b97f21a04cb/volumes"
Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.811613 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" path="/var/lib/kubelet/pods/f8d847d1-1215-4c1c-9741-fb2dcf39e42d/volumes"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.444077 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"]
Jan 20 15:23:34 crc kubenswrapper[4949]: E0120 15:23:34.445076 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.445095 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.445482 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.446368 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450314 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450477 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450891 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.451029 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.451347 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.463102 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"]
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603612 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603935 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603956 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.604030 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.713874 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.715152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.722675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.723035 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.724062 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.771157 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:35 crc kubenswrapper[4949]: I0120 15:23:35.301170 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"]
Jan 20 15:23:35 crc kubenswrapper[4949]: I0120 15:23:35.312782 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.224256 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerStarted","Data":"54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04"}
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.498499 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"]
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.502448 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.504933 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"]
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643908 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745323 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745356 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.746388 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.763088 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.821385 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.233079 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerStarted","Data":"c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8"}
Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.282138 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" podStartSLOduration=2.260051714 podStartE2EDuration="3.282122617s" podCreationTimestamp="2026-01-20 15:23:34 +0000 UTC" firstStartedPulling="2026-01-20 15:23:35.312580931 +0000 UTC m=+2011.122411789" lastFinishedPulling="2026-01-20 15:23:36.334651834 +0000 UTC m=+2012.144482692" observedRunningTime="2026-01-20 15:23:37.276871023 +0000 UTC m=+2013.086701881" watchObservedRunningTime="2026-01-20 15:23:37.282122617 +0000 UTC m=+2013.091953475"
Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.298019 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"]
Jan 20 15:23:37 crc kubenswrapper[4949]: W0120 15:23:37.300834 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68738f06_f8fa_40ea_8af8_9aad9957433b.slice/crio-28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c WatchSource:0}: Error finding container 28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c: Status 404 returned error can't find the container with id 28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c
Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.245831 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238" exitCode=0
Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.245920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"}
Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.246102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerStarted","Data":"28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c"}
Jan 20 15:23:40 crc kubenswrapper[4949]: I0120 15:23:40.271655 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" exitCode=0
Jan 20 15:23:40 crc kubenswrapper[4949]: I0120 15:23:40.271786 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"}
Jan 20 15:23:43 crc kubenswrapper[4949]: I0120 15:23:43.322234 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerStarted","Data":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"}
Jan 20 15:23:43 crc kubenswrapper[4949]: I0120 15:23:43.343751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbjkv" podStartSLOduration=2.6413070899999997 podStartE2EDuration="7.343733404s" podCreationTimestamp="2026-01-20 15:23:36 +0000 UTC" firstStartedPulling="2026-01-20 15:23:38.2479518 +0000 UTC m=+2014.057782658" lastFinishedPulling="2026-01-20 15:23:42.950378114 +0000 UTC m=+2018.760208972" observedRunningTime="2026-01-20 15:23:43.342055971 +0000 UTC m=+2019.151886839" watchObservedRunningTime="2026-01-20 15:23:43.343733404 +0000 UTC m=+2019.153564272"
Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.821917 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.822281 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.897111 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbjkv"
Jan 20 15:23:49 crc kubenswrapper[4949]: I0120 15:23:49.380461 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerID="c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8" exitCode=0
Jan 20 15:23:49 crc kubenswrapper[4949]: I0120 15:23:49.380548 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerDied","Data":"c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8"}
Jan 20 15:23:50 crc kubenswrapper[4949]: I0120 15:23:50.852116 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") "
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031315 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") "
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031412 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") "
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031478 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") "
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031562 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") "
Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.038320 4949 operation_generator.go:803]
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.038379 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph" (OuterVolumeSpecName: "ceph") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.039474 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h" (OuterVolumeSpecName: "kube-api-access-lj84h") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "kube-api-access-lj84h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.062677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory" (OuterVolumeSpecName: "inventory") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.083672 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134508 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134563 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134581 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134594 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134607 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410839 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerDied","Data":"54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04"} Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410908 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410924 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488194 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:51 crc kubenswrapper[4949]: E0120 15:23:51.488668 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488694 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488933 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.489730 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492056 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492390 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492420 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492852 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492856 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.511189 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: 
\"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642481 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642685 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744627 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744703 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744768 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744800 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744879 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.751233 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752007 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.767870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.820560 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:52 crc kubenswrapper[4949]: I0120 15:23:52.354851 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:52 crc kubenswrapper[4949]: I0120 15:23:52.427673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerStarted","Data":"0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09"} Jan 20 15:23:53 crc kubenswrapper[4949]: I0120 15:23:53.437066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerStarted","Data":"647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b"} Jan 20 15:23:53 crc kubenswrapper[4949]: I0120 15:23:53.461122 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" podStartSLOduration=1.949380109 podStartE2EDuration="2.461099633s" podCreationTimestamp="2026-01-20 15:23:51 +0000 UTC" firstStartedPulling="2026-01-20 15:23:52.370864098 +0000 UTC m=+2028.180694956" lastFinishedPulling="2026-01-20 15:23:52.882583602 +0000 UTC m=+2028.692414480" observedRunningTime="2026-01-20 15:23:53.455480508 +0000 UTC m=+2029.265311366" watchObservedRunningTime="2026-01-20 15:23:53.461099633 +0000 UTC m=+2029.270930491" Jan 20 15:23:56 crc kubenswrapper[4949]: I0120 15:23:56.869473 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:56 crc kubenswrapper[4949]: I0120 15:23:56.934891 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:57 crc 
kubenswrapper[4949]: I0120 15:23:57.152844 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.152919 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.152980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.153981 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.154065 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" gracePeriod=600 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.475779 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" 
containerID="50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" exitCode=0 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.475976 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbjkv" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server" containerID="cri-o://455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" gracePeriod=2 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.476046 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"} Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.476078 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.919823 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.076169 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.076335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.077174 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities" (OuterVolumeSpecName: "utilities") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.077268 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.079103 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.085146 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d" (OuterVolumeSpecName: "kube-api-access-xqx6d") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "kube-api-access-xqx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.098597 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.181069 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.181114 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484834 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" exitCode=0 Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484904 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484985 4949 scope.go:117] "RemoveContainer" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.485558 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.491840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.549674 4949 scope.go:117] "RemoveContainer" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.570168 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.574992 4949 scope.go:117] "RemoveContainer" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.577880 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613196 4949 scope.go:117] "RemoveContainer" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.613684 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": container with ID starting with 455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87 not found: ID does not exist" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613718 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"} 
err="failed to get container status \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": rpc error: code = NotFound desc = could not find container \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": container with ID starting with 455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87 not found: ID does not exist"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613744 4949 scope.go:117] "RemoveContainer" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"
Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.614493 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": container with ID starting with 0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5 not found: ID does not exist" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.614716 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"} err="failed to get container status \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": rpc error: code = NotFound desc = could not find container \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": container with ID starting with 0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5 not found: ID does not exist"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.614818 4949 scope.go:117] "RemoveContainer" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"
Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.615369 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": container with ID starting with 3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238 not found: ID does not exist" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.615477 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"} err="failed to get container status \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": rpc error: code = NotFound desc = could not find container \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": container with ID starting with 3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238 not found: ID does not exist"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.802442 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" path="/var/lib/kubelet/pods/68738f06-f8fa-40ea-8af8-9aad9957433b/volumes"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.510381 4949 scope.go:117] "RemoveContainer" containerID="97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.542768 4949 scope.go:117] "RemoveContainer" containerID="c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.633199 4949 scope.go:117] "RemoveContainer" containerID="1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.659436 4949 scope.go:117] "RemoveContainer" containerID="e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.729188 4949 scope.go:117] "RemoveContainer" containerID="f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.758063 4949 scope.go:117] "RemoveContainer" containerID="61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.818550 4949 scope.go:117] "RemoveContainer" containerID="0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"
Jan 20 15:25:17 crc kubenswrapper[4949]: I0120 15:25:17.993845 4949 scope.go:117] "RemoveContainer" containerID="e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"
Jan 20 15:25:18 crc kubenswrapper[4949]: I0120 15:25:18.040363 4949 scope.go:117] "RemoveContainer" containerID="34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"
Jan 20 15:25:18 crc kubenswrapper[4949]: I0120 15:25:18.104232 4949 scope.go:117] "RemoveContainer" containerID="56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"
Jan 20 15:25:56 crc kubenswrapper[4949]: E0120 15:25:56.050937 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-conmon-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache]"
Jan 20 15:25:56 crc kubenswrapper[4949]: I0120 15:25:56.578107 4949 generic.go:334] "Generic (PLEG): container finished" podID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerID="647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b" exitCode=0
Jan 20 15:25:56 crc kubenswrapper[4949]: I0120 15:25:56.578242 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerDied","Data":"647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b"}
Jan 20 15:25:57 crc kubenswrapper[4949]: I0120 15:25:57.152134 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:25:57 crc kubenswrapper[4949]: I0120 15:25:57.152190 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.083165 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250200 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") "
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") "
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250442 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") "
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250482 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") "
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") "
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.257418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw" (OuterVolumeSpecName: "kube-api-access-cvfmw") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "kube-api-access-cvfmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.257563 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.259425 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph" (OuterVolumeSpecName: "ceph") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.277138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory" (OuterVolumeSpecName: "inventory") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.278304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353286 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353324 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") on node \"crc\" DevicePath \"\""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353339 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353351 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353362 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerDied","Data":"0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09"}
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598569 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598218 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678346 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"]
Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678700 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-content"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678721 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-content"
Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678736 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678743 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server"
Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678752 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678760 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678776 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-utilities"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678783 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-utilities"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678946 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678963 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.679608 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.681930 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682201 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682367 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682569 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.688534 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"]
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860974 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.861123 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962561 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.966299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.966445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.967296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.980547 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.996544 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:25:59 crc kubenswrapper[4949]: I0120 15:25:59.578809 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"]
Jan 20 15:25:59 crc kubenswrapper[4949]: I0120 15:25:59.607639 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerStarted","Data":"623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b"}
Jan 20 15:26:00 crc kubenswrapper[4949]: I0120 15:26:00.617181 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerStarted","Data":"fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7"}
Jan 20 15:26:00 crc kubenswrapper[4949]: I0120 15:26:00.641148 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" podStartSLOduration=2.141157226 podStartE2EDuration="2.64112695s" podCreationTimestamp="2026-01-20 15:25:58 +0000 UTC" firstStartedPulling="2026-01-20 15:25:59.586155259 +0000 UTC m=+2155.395986117" lastFinishedPulling="2026-01-20 15:26:00.086124983 +0000 UTC m=+2155.895955841" observedRunningTime="2026-01-20 15:26:00.639188582 +0000 UTC m=+2156.449019440" watchObservedRunningTime="2026-01-20 15:26:00.64112695 +0000 UTC m=+2156.450957808"
Jan 20 15:26:26 crc kubenswrapper[4949]: I0120 15:26:26.866632 4949 generic.go:334] "Generic (PLEG): container finished" podID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerID="fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7" exitCode=0
Jan 20 15:26:26 crc kubenswrapper[4949]: I0120 15:26:26.866751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerDied","Data":"fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7"}
Jan 20 15:26:27 crc kubenswrapper[4949]: I0120 15:26:27.152023 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:26:27 crc kubenswrapper[4949]: I0120 15:26:27.152396 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.374059 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483154 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") "
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") "
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483379 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") "
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483402 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") "
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.490667 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np" (OuterVolumeSpecName: "kube-api-access-hq2np") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "kube-api-access-hq2np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.490872 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph" (OuterVolumeSpecName: "ceph") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.513718 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.525452 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory" (OuterVolumeSpecName: "inventory") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585460 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585487 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585500 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") on node \"crc\" DevicePath \"\""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585511 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerDied","Data":"623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b"}
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885552 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885371 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.992470 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"]
Jan 20 15:26:28 crc kubenswrapper[4949]: E0120 15:26:28.993055 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.993159 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.993386 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.994039 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996461 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996501 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996474 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.997167 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.997342 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.007290 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"]
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095054 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197048 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197260 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204509 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204561 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.217120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.322485 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.829913 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"]
Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.894812 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerStarted","Data":"ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14"}
Jan 20 15:26:30 crc kubenswrapper[4949]: I0120 15:26:30.906166 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerStarted","Data":"0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c"}
Jan 20 15:26:30 crc kubenswrapper[4949]: I0120 15:26:30.943059 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" podStartSLOduration=2.422857365 podStartE2EDuration="2.942967435s" podCreationTimestamp="2026-01-20 15:26:28 +0000 UTC" firstStartedPulling="2026-01-20 15:26:29.829861316 +0000 UTC m=+2185.639692174" lastFinishedPulling="2026-01-20 15:26:30.349971346 +0000 UTC m=+2186.159802244" observedRunningTime="2026-01-20 15:26:30.924316133 +0000 UTC m=+2186.734146991" watchObservedRunningTime="2026-01-20 15:26:30.942967435 +0000 UTC m=+2186.752798333"
Jan 20 15:26:35 crc kubenswrapper[4949]: I0120 15:26:35.989806 4949 generic.go:334] "Generic (PLEG): container finished" podID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerID="0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c" exitCode=0
Jan 20 15:26:35 crc kubenswrapper[4949]: I0120 15:26:35.989891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerDied","Data":"0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c"}
Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.423328 4949 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583071 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583392 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.590191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk" (OuterVolumeSpecName: "kube-api-access-7m7hk") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "kube-api-access-7m7hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.593202 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph" (OuterVolumeSpecName: "ceph") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.616578 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.634167 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory" (OuterVolumeSpecName: "inventory") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686320 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686365 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686379 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686393 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011331 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerDied","Data":"ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14"} Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011670 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011412 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.086051 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:38 crc kubenswrapper[4949]: E0120 15:26:38.086830 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.086924 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.087160 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.087905 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.090378 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.090880 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.091497 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.091918 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.101883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.105752 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195812 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297410 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297471 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.318235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.409140 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.943713 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:39 crc kubenswrapper[4949]: I0120 15:26:39.021378 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerStarted","Data":"240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b"} Jan 20 15:26:42 crc kubenswrapper[4949]: I0120 15:26:42.103685 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerStarted","Data":"dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf"} Jan 20 15:26:42 crc kubenswrapper[4949]: I0120 15:26:42.126766 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" podStartSLOduration=1.9454792589999998 podStartE2EDuration="4.126748583s" podCreationTimestamp="2026-01-20 15:26:38 +0000 UTC" firstStartedPulling="2026-01-20 15:26:38.953441809 +0000 UTC m=+2194.763272667" 
lastFinishedPulling="2026-01-20 15:26:41.134711133 +0000 UTC m=+2196.944541991" observedRunningTime="2026-01-20 15:26:42.125708551 +0000 UTC m=+2197.935539409" watchObservedRunningTime="2026-01-20 15:26:42.126748583 +0000 UTC m=+2197.936579441" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.152543 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.153444 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.153615 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.154893 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.154969 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" 
containerID="cri-o://452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" gracePeriod=600 Jan 20 15:26:57 crc kubenswrapper[4949]: E0120 15:26:57.281289 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.237707 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" exitCode=0 Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.237721 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.238169 4949 scope.go:117] "RemoveContainer" containerID="50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.238989 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:26:58 crc kubenswrapper[4949]: E0120 15:26:58.239395 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:08 crc kubenswrapper[4949]: I0120 15:27:08.788581 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:08 crc kubenswrapper[4949]: E0120 15:27:08.789312 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:19 crc kubenswrapper[4949]: I0120 15:27:19.410672 4949 generic.go:334] "Generic (PLEG): container finished" podID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerID="dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf" exitCode=0 Jan 20 15:27:19 crc kubenswrapper[4949]: I0120 15:27:19.410706 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerDied","Data":"dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf"} Jan 20 15:27:20 crc kubenswrapper[4949]: I0120 15:27:20.791312 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:20 crc kubenswrapper[4949]: E0120 15:27:20.791853 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:20 
crc kubenswrapper[4949]: I0120 15:27:20.927648 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062096 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062878 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.067858 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph" (OuterVolumeSpecName: "ceph") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.070181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt" (OuterVolumeSpecName: "kube-api-access-np8zt") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "kube-api-access-np8zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.096073 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory" (OuterVolumeSpecName: "inventory") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.133072 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165607 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165867 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165993 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.166102 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427589 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerDied","Data":"240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b"} Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427631 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427649 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.530347 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:21 crc kubenswrapper[4949]: E0120 15:27:21.534314 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.534352 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.534857 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.535852 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.555720 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.555980 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.556158 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.556373 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.557296 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.570298 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: 
\"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.575038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.676486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677178 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677243 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.680781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.680854 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.681243 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: 
\"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.700131 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.872024 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:22 crc kubenswrapper[4949]: I0120 15:27:22.422194 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:22 crc kubenswrapper[4949]: I0120 15:27:22.435176 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerStarted","Data":"03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a"} Jan 20 15:27:24 crc kubenswrapper[4949]: I0120 15:27:24.450773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerStarted","Data":"868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d"} Jan 20 15:27:24 crc kubenswrapper[4949]: I0120 15:27:24.479436 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" podStartSLOduration=2.568294271 podStartE2EDuration="3.479404469s" podCreationTimestamp="2026-01-20 15:27:21 +0000 UTC" 
firstStartedPulling="2026-01-20 15:27:22.425754579 +0000 UTC m=+2238.235585447" lastFinishedPulling="2026-01-20 15:27:23.336864787 +0000 UTC m=+2239.146695645" observedRunningTime="2026-01-20 15:27:24.470291055 +0000 UTC m=+2240.280121963" watchObservedRunningTime="2026-01-20 15:27:24.479404469 +0000 UTC m=+2240.289235367" Jan 20 15:27:28 crc kubenswrapper[4949]: I0120 15:27:28.486777 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerID="868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d" exitCode=0 Jan 20 15:27:28 crc kubenswrapper[4949]: I0120 15:27:28.487255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerDied","Data":"868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d"} Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.026851 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066417 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066561 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066802 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.072583 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph" (OuterVolumeSpecName: "ceph") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.072923 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c" (OuterVolumeSpecName: "kube-api-access-67z6c") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "kube-api-access-67z6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.109830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory" (OuterVolumeSpecName: "inventory") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.111439 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168927 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168970 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168983 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168991 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerDied","Data":"03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a"} Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505394 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505423 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581145 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:30 crc kubenswrapper[4949]: E0120 15:27:30.581675 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581692 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581896 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.582611 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.588759 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.588769 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589099 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589147 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589188 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.591509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677897 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: 
\"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677978 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779809 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.783594 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.783594 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.784778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: 
\"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.804242 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.948418 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:31 crc kubenswrapper[4949]: I0120 15:27:31.515442 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:32 crc kubenswrapper[4949]: I0120 15:27:32.521170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerStarted","Data":"890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b"} Jan 20 15:27:33 crc kubenswrapper[4949]: I0120 15:27:33.528628 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerStarted","Data":"f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af"} Jan 20 15:27:33 crc kubenswrapper[4949]: I0120 15:27:33.548530 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" podStartSLOduration=2.499596375 podStartE2EDuration="3.548501724s" podCreationTimestamp="2026-01-20 15:27:30 +0000 UTC" 
firstStartedPulling="2026-01-20 15:27:31.520848465 +0000 UTC m=+2247.330679323" lastFinishedPulling="2026-01-20 15:27:32.569753814 +0000 UTC m=+2248.379584672" observedRunningTime="2026-01-20 15:27:33.544885255 +0000 UTC m=+2249.354716113" watchObservedRunningTime="2026-01-20 15:27:33.548501724 +0000 UTC m=+2249.358332582" Jan 20 15:27:35 crc kubenswrapper[4949]: I0120 15:27:35.789371 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:35 crc kubenswrapper[4949]: E0120 15:27:35.790058 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:48 crc kubenswrapper[4949]: I0120 15:27:48.788984 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:48 crc kubenswrapper[4949]: E0120 15:27:48.789841 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:01 crc kubenswrapper[4949]: I0120 15:28:01.789313 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:01 crc kubenswrapper[4949]: E0120 15:28:01.790049 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:14 crc kubenswrapper[4949]: I0120 15:28:14.795488 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:14 crc kubenswrapper[4949]: E0120 15:28:14.796382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:21 crc kubenswrapper[4949]: I0120 15:28:21.344586 4949 generic.go:334] "Generic (PLEG): container finished" podID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerID="f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af" exitCode=0 Jan 20 15:28:21 crc kubenswrapper[4949]: I0120 15:28:21.344686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerDied","Data":"f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af"} Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.704744 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816220 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816390 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.824702 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph" (OuterVolumeSpecName: "ceph") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.829974 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd" (OuterVolumeSpecName: "kube-api-access-mvgvd") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "kube-api-access-mvgvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.846344 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory" (OuterVolumeSpecName: "inventory") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.849804 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918112 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918137 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918147 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918155 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerDied","Data":"890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b"} Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375863 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375595 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.477285 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"] Jan 20 15:28:23 crc kubenswrapper[4949]: E0120 15:28:23.477819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.477844 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.478123 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.478944 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.480886 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481300 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481783 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481998 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.483699 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.487228 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"]
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.527723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528231 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.629931 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630301 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630356 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630391 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.636715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.636781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.641046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.651931 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.805563 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:24 crc kubenswrapper[4949]: I0120 15:28:24.345894 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"]
Jan 20 15:28:24 crc kubenswrapper[4949]: I0120 15:28:24.383801 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerStarted","Data":"974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70"}
Jan 20 15:28:25 crc kubenswrapper[4949]: I0120 15:28:25.394315 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerStarted","Data":"36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7"}
Jan 20 15:28:25 crc kubenswrapper[4949]: I0120 15:28:25.411262 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" podStartSLOduration=1.890981735 podStartE2EDuration="2.411248408s" podCreationTimestamp="2026-01-20 15:28:23 +0000 UTC" firstStartedPulling="2026-01-20 15:28:24.350107932 +0000 UTC m=+2300.159938790" lastFinishedPulling="2026-01-20 15:28:24.870374565 +0000 UTC m=+2300.680205463" observedRunningTime="2026-01-20 15:28:25.408346631 +0000 UTC m=+2301.218177489" watchObservedRunningTime="2026-01-20 15:28:25.411248408 +0000 UTC m=+2301.221079266"
Jan 20 15:28:27 crc kubenswrapper[4949]: I0120 15:28:27.789430 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:28:27 crc kubenswrapper[4949]: E0120 15:28:27.790157 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.378148 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"]
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.382680 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.416866 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"]
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466118 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466347 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567706 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.568335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.568510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.590027 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.704765 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:30 crc kubenswrapper[4949]: I0120 15:28:30.312659 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"]
Jan 20 15:28:30 crc kubenswrapper[4949]: I0120 15:28:30.453127 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"653caeb758c7ac046112bc5240ea531cc0b62948a13a3b02070006b005805b4c"}
Jan 20 15:28:31 crc kubenswrapper[4949]: I0120 15:28:31.462050 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" exitCode=0
Jan 20 15:28:31 crc kubenswrapper[4949]: I0120 15:28:31.462115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"}
Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.474116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"}
Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.930249 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"]
Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.931952 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.958286 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"]
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040373 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142766 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.143686 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.143755 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.169902 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.259060 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx"
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.484571 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" exitCode=0
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.484602 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"}
Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.776486 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"]
Jan 20 15:28:33 crc kubenswrapper[4949]: W0120 15:28:33.783366 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d1d4bb_9d56_4ee1_8afb_bedaedd08a16.slice/crio-aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae WatchSource:0}: Error finding container aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae: Status 404 returned error can't find the container with id aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae
Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503350 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" exitCode=0
Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503422 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085"}
Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503703 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae"}
Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.517306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"}
Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.519791 4949 generic.go:334] "Generic (PLEG): container finished" podID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerID="36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7" exitCode=0
Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.519857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerDied","Data":"36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7"}
Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.546681 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8w6nr" podStartSLOduration=3.723345026 podStartE2EDuration="6.546654874s" podCreationTimestamp="2026-01-20 15:28:29 +0000 UTC" firstStartedPulling="2026-01-20 15:28:31.465997505 +0000 UTC m=+2307.275828363" lastFinishedPulling="2026-01-20 15:28:34.289307353 +0000 UTC m=+2310.099138211" observedRunningTime="2026-01-20 15:28:35.543617973 +0000 UTC m=+2311.353448891" watchObservedRunningTime="2026-01-20 15:28:35.546654874 +0000 UTC m=+2311.356485752"
Jan 20 15:28:36 crc kubenswrapper[4949]: I0120 15:28:36.534234 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"}
Jan 20 15:28:36 crc kubenswrapper[4949]: I0120 15:28:36.923126 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") "
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") "
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016310 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") "
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016380 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") "
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.021762 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph" (OuterVolumeSpecName: "ceph") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.021910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk" (OuterVolumeSpecName: "kube-api-access-xrmhk") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "kube-api-access-xrmhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.045483 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.049692 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117642 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117682 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117701 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") on node \"crc\" DevicePath \"\""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117714 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542344 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerDied","Data":"974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70"}
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542395 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542427 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.675447 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"]
Jan 20 15:28:37 crc kubenswrapper[4949]: E0120 15:28:37.675877 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.675893 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.676055 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.676708 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.679596 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.679854 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.694926 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.694933 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.695116 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.695245 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"]
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742265 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742335 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742432 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.845062 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.849593 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.851904 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.860923 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.864446 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.003626 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"
Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.398325 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"]
Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.407644 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.550128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerStarted","Data":"932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072"}
Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.789409 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:28:38 crc kubenswrapper[4949]: E0120 15:28:38.789999 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.566204 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" exitCode=0
Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.566306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"}
Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.705430 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.705508 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.757771 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:40 crc kubenswrapper[4949]: I0120 15:28:40.626089 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8w6nr"
Jan 20 15:28:40 crc kubenswrapper[4949]: I0120 15:28:40.921639 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"]
Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.586792 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"}
Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.588442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerStarted","Data":"157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4"}
Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.611494 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fltdx" podStartSLOduration=3.141466698 podStartE2EDuration="9.611454684s" podCreationTimestamp="2026-01-20 15:28:32 +0000 UTC" firstStartedPulling="2026-01-20 15:28:34.504945792 +0000 UTC m=+2310.314776660" lastFinishedPulling="2026-01-20 15:28:40.974933788 +0000 UTC m=+2316.784764646" observedRunningTime="2026-01-20 15:28:41.605186905 +0000 UTC m=+2317.415017773" watchObservedRunningTime="2026-01-20 15:28:41.611454684 +0000 UTC m=+2317.421285542"
Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.627583 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" podStartSLOduration=2.234628803 podStartE2EDuration="4.627564298s" podCreationTimestamp="2026-01-20 15:28:37 +0000 UTC" firstStartedPulling="2026-01-20 15:28:38.407339104 +0000 UTC m=+2314.217169962" lastFinishedPulling="2026-01-20 15:28:40.800274559 +0000 UTC m=+2316.610105457" observedRunningTime="2026-01-20 15:28:41.62232351 +0000 UTC m=+2317.432154368" watchObservedRunningTime="2026-01-20 15:28:41.627564298 +0000 UTC m=+2317.437395156"
Jan 20 15:28:42 crc kubenswrapper[4949]: I0120 15:28:42.595836 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8w6nr" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" containerID="cri-o://db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" gracePeriod=2
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.065874 4949 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159275 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159356 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159836 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities" (OuterVolumeSpecName: "utilities") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.165268 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd" (OuterVolumeSpecName: "kube-api-access-nzhxd") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "kube-api-access-nzhxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.206027 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.259915 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.260670 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261198 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261215 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261225 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605599 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" exitCode=0 Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 
15:28:43.605659 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605703 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"} Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"653caeb758c7ac046112bc5240ea531cc0b62948a13a3b02070006b005805b4c"} Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605792 4949 scope.go:117] "RemoveContainer" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.651094 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.660040 4949 scope.go:117] "RemoveContainer" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.662048 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.690852 4949 scope.go:117] "RemoveContainer" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735262 4949 scope.go:117] "RemoveContainer" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.735803 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": container with ID starting with db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31 not found: ID does not exist" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735833 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"} err="failed to get container status \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": rpc error: code = NotFound desc = could not find container \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": container with ID starting with db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31 not found: ID does not exist" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735853 4949 scope.go:117] "RemoveContainer" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.736219 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": container with ID starting with 035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43 not found: ID does not exist" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.736341 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"} err="failed to get container status \"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": rpc error: code = NotFound desc = could not find container 
\"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": container with ID starting with 035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43 not found: ID does not exist" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.736480 4949 scope.go:117] "RemoveContainer" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.736992 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": container with ID starting with 1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa not found: ID does not exist" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.737022 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"} err="failed to get container status \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": rpc error: code = NotFound desc = could not find container \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": container with ID starting with 1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa not found: ID does not exist" Jan 20 15:28:44 crc kubenswrapper[4949]: I0120 15:28:44.344333 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fltdx" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" probeResult="failure" output=< Jan 20 15:28:44 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:28:44 crc kubenswrapper[4949]: > Jan 20 15:28:44 crc kubenswrapper[4949]: I0120 15:28:44.801924 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a0beddb2-34aa-4859-b114-03e9876f9722" path="/var/lib/kubelet/pods/a0beddb2-34aa-4859-b114-03e9876f9722/volumes" Jan 20 15:28:48 crc kubenswrapper[4949]: I0120 15:28:48.650941 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerID="157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4" exitCode=0 Jan 20 15:28:48 crc kubenswrapper[4949]: I0120 15:28:48.651041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerDied","Data":"157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4"} Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.070239 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233043 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233138 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc 
kubenswrapper[4949]: I0120 15:28:50.233337 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.239496 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4" (OuterVolumeSpecName: "kube-api-access-9crq4") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "kube-api-access-9crq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.241231 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph" (OuterVolumeSpecName: "ceph") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.264380 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.269204 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory" (OuterVolumeSpecName: "inventory") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335827 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335885 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335965 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335978 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerDied","Data":"932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072"} Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670499 4949 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670260 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.794306 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.794552 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.829606 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830180 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830206 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830221 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-utilities" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830231 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" 
containerName="extract-utilities" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830266 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-content" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830276 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-content" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830308 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830546 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.831425 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834671 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834732 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834869 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.836486 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.836834 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957284 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: 
\"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957668 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059928 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059999 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.060019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.064510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.064655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.066242 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.076044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.155029 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.698978 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:52 crc kubenswrapper[4949]: I0120 15:28:52.693252 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerStarted","Data":"2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0"} Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.340050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.390680 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.586488 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.704269 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" 
event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerStarted","Data":"06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11"} Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.727891 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" podStartSLOduration=2.870842414 podStartE2EDuration="3.727874527s" podCreationTimestamp="2026-01-20 15:28:50 +0000 UTC" firstStartedPulling="2026-01-20 15:28:51.705123497 +0000 UTC m=+2327.514954355" lastFinishedPulling="2026-01-20 15:28:52.56215561 +0000 UTC m=+2328.371986468" observedRunningTime="2026-01-20 15:28:53.723843837 +0000 UTC m=+2329.533674735" watchObservedRunningTime="2026-01-20 15:28:53.727874527 +0000 UTC m=+2329.537705385" Jan 20 15:28:54 crc kubenswrapper[4949]: I0120 15:28:54.711508 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fltdx" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" containerID="cri-o://87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" gracePeriod=2 Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.302012 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.432973 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433136 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433221 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433886 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities" (OuterVolumeSpecName: "utilities") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.440816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5" (OuterVolumeSpecName: "kube-api-access-5qsb5") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "kube-api-access-5qsb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.537420 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.537456 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.556489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.639756 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719772 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" exitCode=0 Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"} Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae"} Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719857 4949 scope.go:117] "RemoveContainer" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719854 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.745458 4949 scope.go:117] "RemoveContainer" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.756856 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.768341 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.785917 4949 scope.go:117] "RemoveContainer" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.809935 4949 scope.go:117] "RemoveContainer" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 15:28:55.810460 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": container with ID starting with 87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1 not found: ID does not exist" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.810498 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"} err="failed to get container status \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": rpc error: code = NotFound desc = could not find container \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": container with ID starting with 87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1 not found: ID does not exist" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.810536 4949 scope.go:117] "RemoveContainer" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 15:28:55.811092 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": container with ID starting with fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4 not found: ID does not exist" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811139 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"} err="failed to get container status \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": rpc error: code = NotFound desc = could not find container \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": container with ID starting with fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4 not found: ID does not exist" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811170 4949 scope.go:117] "RemoveContainer" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 
15:28:55.811567 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": container with ID starting with 5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085 not found: ID does not exist" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811598 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085"} err="failed to get container status \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": rpc error: code = NotFound desc = could not find container \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": container with ID starting with 5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085 not found: ID does not exist" Jan 20 15:28:56 crc kubenswrapper[4949]: I0120 15:28:56.798763 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" path="/var/lib/kubelet/pods/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16/volumes" Jan 20 15:29:01 crc kubenswrapper[4949]: I0120 15:29:01.788692 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:01 crc kubenswrapper[4949]: E0120 15:29:01.789629 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:29:02 crc kubenswrapper[4949]: I0120 15:29:02.796954 
4949 generic.go:334] "Generic (PLEG): container finished" podID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerID="06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11" exitCode=0 Jan 20 15:29:02 crc kubenswrapper[4949]: I0120 15:29:02.809443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerDied","Data":"06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11"} Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.240873 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313207 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313330 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.321368 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn" (OuterVolumeSpecName: "kube-api-access-2gvbn") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "kube-api-access-2gvbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.323852 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph" (OuterVolumeSpecName: "ceph") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.346856 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.354161 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory" (OuterVolumeSpecName: "inventory") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416100 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416149 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416167 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416178 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.823939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerDied","Data":"2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0"} Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.824209 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.824082 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.922271 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"] Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.922880 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.922950 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923031 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-utilities" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923090 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-utilities" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923149 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-content" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923199 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-content" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923266 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923336 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923595 
4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923673 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.924342 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927380 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927378 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927723 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927739 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927844 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927909 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927969 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.928443 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.933351 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"] Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.024896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.024976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025148 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: 
\"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025943 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: 
\"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026169 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026348 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026417 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026598 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.127721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: 
\"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128896 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129346 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.130262 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131161 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.132128 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.134382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.134755 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.135391 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.135877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.136583 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137048 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137403 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137410 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.138347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.139993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.143870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.150089 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.240080 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.824789 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"]
Jan 20 15:29:06 crc kubenswrapper[4949]: I0120 15:29:06.847968 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerStarted","Data":"2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5"}
Jan 20 15:29:07 crc kubenswrapper[4949]: I0120 15:29:07.857205 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerStarted","Data":"d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859"}
Jan 20 15:29:07 crc kubenswrapper[4949]: I0120 15:29:07.878839 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" podStartSLOduration=2.828716324 podStartE2EDuration="3.87882253s" podCreationTimestamp="2026-01-20 15:29:04 +0000 UTC" firstStartedPulling="2026-01-20 15:29:05.841919418 +0000 UTC m=+2341.651750276" lastFinishedPulling="2026-01-20 15:29:06.892025624 +0000 UTC m=+2342.701856482" observedRunningTime="2026-01-20 15:29:07.875553448 +0000 UTC m=+2343.685384316" watchObservedRunningTime="2026-01-20 15:29:07.87882253 +0000 UTC m=+2343.688653388"
Jan 20 15:29:12 crc kubenswrapper[4949]: I0120 15:29:12.789755 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:29:12 crc kubenswrapper[4949]: E0120 15:29:12.790669 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:29:25 crc kubenswrapper[4949]: I0120 15:29:25.789692 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:29:25 crc kubenswrapper[4949]: E0120 15:29:25.790633 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:29:37 crc kubenswrapper[4949]: I0120 15:29:37.789133 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:29:37 crc kubenswrapper[4949]: E0120 15:29:37.789867 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:29:42 crc kubenswrapper[4949]: I0120 15:29:42.176779 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerID="d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859" exitCode=0
Jan 20 15:29:42 crc kubenswrapper[4949]: I0120 15:29:42.176856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerDied","Data":"d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859"}
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.629358 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690461 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690573 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690759 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690782 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690842 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691948 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691975 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.692061 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.692143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.700323 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.700406 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms" (OuterVolumeSpecName: "kube-api-access-9vbms") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "kube-api-access-9vbms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701322 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701396 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701541 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.702142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph" (OuterVolumeSpecName: "ceph") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.702668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.703564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.703675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.704848 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.730842 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.736805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory" (OuterVolumeSpecName: "inventory") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795149 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795209 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795232 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795252 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795295 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795314 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795333 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795353 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795373 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795391 4949 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795409 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795426 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795445 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerDied","Data":"2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5"}
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198951 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198995 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.303227 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"]
Jan 20 15:29:44 crc kubenswrapper[4949]: E0120 15:29:44.303922 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.303964 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.304314 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.305177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.309860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.310241 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.310899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.311441 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.313956 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.321847 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"]
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.407903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408067 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408094 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510353 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510375 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.514957 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.515128 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.516480 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.530054 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.621394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.971923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"]
Jan 20 15:29:45 crc kubenswrapper[4949]: I0120 15:29:45.210862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerStarted","Data":"1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb"}
Jan 20 15:29:46 crc kubenswrapper[4949]: I0120 15:29:46.221477 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerStarted","Data":"c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489"}
Jan 20 15:29:46 crc kubenswrapper[4949]: I0120 15:29:46.251893 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" podStartSLOduration=1.630685105 podStartE2EDuration="2.251859772s" podCreationTimestamp="2026-01-20 15:29:44 +0000 UTC" firstStartedPulling="2026-01-20 15:29:44.98407967 +0000 UTC m=+2380.793910528" lastFinishedPulling="2026-01-20 15:29:45.605254317 +0000 UTC m=+2381.415085195" observedRunningTime="2026-01-20 15:29:46.241240624 +0000 UTC m=+2382.051071502" watchObservedRunningTime="2026-01-20 15:29:46.251859772 +0000 UTC m=+2382.061690630"
Jan 20 15:29:50 crc kubenswrapper[4949]: I0120 15:29:50.790138 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:29:50 crc kubenswrapper[4949]: E0120 15:29:50.790844 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:29:51 crc kubenswrapper[4949]: I0120 15:29:51.266863 4949 generic.go:334] "Generic (PLEG): container finished" podID="70d9d029-15fb-479a-b668-926d3167b179" containerID="c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489" exitCode=0
Jan 20 15:29:51 crc kubenswrapper[4949]: I0120 15:29:51.266962 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerDied","Data":"c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489"}
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.671470 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775483 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") "
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") "
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") "
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775746 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") "
Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.792267 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r" (OuterVolumeSpecName: "kube-api-access-ldn5r") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "kube-api-access-ldn5r".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.792490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph" (OuterVolumeSpecName: "ceph") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.804070 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory" (OuterVolumeSpecName: "inventory") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.811580 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878018 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878064 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878078 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878089 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerDied","Data":"1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb"} Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295899 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295919 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.444156 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"] Jan 20 15:29:53 crc kubenswrapper[4949]: E0120 15:29:53.444836 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.444852 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.445030 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.445585 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447711 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447989 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.448708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.458409 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.460256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.482644 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"] Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591319 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591445 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591605 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.592011 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693295 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693450 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693640 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.696088 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.699561 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.699991 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.700213 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.701937 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.711013 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.781625 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:54 crc kubenswrapper[4949]: I0120 15:29:54.287669 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"] Jan 20 15:29:54 crc kubenswrapper[4949]: I0120 15:29:54.313154 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerStarted","Data":"9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834"} Jan 20 15:29:56 crc kubenswrapper[4949]: I0120 15:29:56.329891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerStarted","Data":"3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787"} Jan 20 15:29:56 crc kubenswrapper[4949]: I0120 15:29:56.356465 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" podStartSLOduration=1.999779437 podStartE2EDuration="3.356446391s" podCreationTimestamp="2026-01-20 15:29:53 +0000 UTC" firstStartedPulling="2026-01-20 15:29:54.294666031 +0000 UTC m=+2390.104496889" lastFinishedPulling="2026-01-20 15:29:55.651332945 +0000 UTC m=+2391.461163843" observedRunningTime="2026-01-20 15:29:56.350231299 +0000 UTC m=+2392.160062157" watchObservedRunningTime="2026-01-20 15:29:56.356446391 +0000 UTC m=+2392.166277249" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.140312 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"] Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.142118 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.144723 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.146806 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.160932 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"] Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323044 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323145 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425819 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425929 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.427455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.435127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.445267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.480706 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.917415 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"] Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374366 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerID="88f113a7b6fc797eb7fe514267bced55439c2fafac70f54e2c53c63c02e7a5c5" exitCode=0 Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerDied","Data":"88f113a7b6fc797eb7fe514267bced55439c2fafac70f54e2c53c63c02e7a5c5"} Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374457 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" 
event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerStarted","Data":"c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0"} Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.733237 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.875347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.875623 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.880695 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.880775 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2" (OuterVolumeSpecName: "kube-api-access-df6b2") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "kube-api-access-df6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.977161 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.977214 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerDied","Data":"c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0"} Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392900 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392648 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.790582 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:03 crc kubenswrapper[4949]: E0120 15:30:03.790825 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.814095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.822668 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 15:30:04 crc kubenswrapper[4949]: I0120 15:30:04.800442 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" path="/var/lib/kubelet/pods/8c06ab34-4b4e-4047-b32d-e9d36c792b1d/volumes" Jan 20 15:30:15 crc kubenswrapper[4949]: I0120 15:30:15.788910 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:15 crc kubenswrapper[4949]: E0120 15:30:15.789751 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:18 crc kubenswrapper[4949]: I0120 15:30:18.298455 4949 scope.go:117] "RemoveContainer" containerID="f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf" Jan 20 15:30:28 crc kubenswrapper[4949]: I0120 15:30:28.789475 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:28 crc kubenswrapper[4949]: E0120 15:30:28.790253 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:42 crc kubenswrapper[4949]: I0120 15:30:42.789675 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:42 crc kubenswrapper[4949]: E0120 15:30:42.790488 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:56 crc kubenswrapper[4949]: I0120 15:30:56.789743 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:56 crc kubenswrapper[4949]: E0120 15:30:56.790631 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:11 crc kubenswrapper[4949]: I0120 15:31:11.789353 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:11 crc kubenswrapper[4949]: E0120 15:31:11.790018 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:13 crc kubenswrapper[4949]: E0120 15:31:13.307565 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb1d8e10_2c84_4a8f_a3d0_653432297fb1.slice/crio-conmon-3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:31:14 crc kubenswrapper[4949]: I0120 15:31:14.040458 4949 generic.go:334] "Generic (PLEG): container finished" podID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerID="3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787" exitCode=0 Jan 20 15:31:14 crc kubenswrapper[4949]: I0120 15:31:14.040739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerDied","Data":"3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787"} Jan 20 
15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.421287 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553346 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553634 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553672 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 
15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.560961 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.562162 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph" (OuterVolumeSpecName: "ceph") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.564429 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs" (OuterVolumeSpecName: "kube-api-access-4mhrs") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "kube-api-access-4mhrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.585856 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory" (OuterVolumeSpecName: "inventory") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.586449 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.596788 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656165 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656207 4949 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656217 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656225 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656233 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656242 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" 
event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerDied","Data":"9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834"} Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062328 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062347 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.161564 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:16 crc kubenswrapper[4949]: E0120 15:31:16.162112 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162132 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: E0120 15:31:16.162156 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162165 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162445 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162480 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.163113 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.165925 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.166117 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.166209 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.167156 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.169494 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.170291 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.174393 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.188644 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267470 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267552 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr7x\" (UniqueName: 
\"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267710 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.368948 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369075 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369099 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369143 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369194 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.374442 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.381548 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.381809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.382159 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.382267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.383206 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.384754 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.481756 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:17 crc kubenswrapper[4949]: I0120 15:31:17.013538 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:17 crc kubenswrapper[4949]: I0120 15:31:17.074648 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerStarted","Data":"958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf"} Jan 20 15:31:18 crc kubenswrapper[4949]: I0120 15:31:18.082628 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerStarted","Data":"9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7"} Jan 20 15:31:18 crc kubenswrapper[4949]: I0120 15:31:18.107568 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" podStartSLOduration=1.377738758 podStartE2EDuration="2.107547409s" podCreationTimestamp="2026-01-20 15:31:16 +0000 UTC" firstStartedPulling="2026-01-20 15:31:17.015016381 +0000 UTC m=+2472.824847239" lastFinishedPulling="2026-01-20 15:31:17.744825022 +0000 UTC m=+2473.554655890" observedRunningTime="2026-01-20 15:31:18.10076836 +0000 UTC m=+2473.910599218" watchObservedRunningTime="2026-01-20 15:31:18.107547409 +0000 UTC m=+2473.917378267" Jan 20 15:31:24 crc kubenswrapper[4949]: I0120 15:31:24.804573 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:24 crc kubenswrapper[4949]: E0120 15:31:24.805129 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:37 crc kubenswrapper[4949]: I0120 15:31:37.789375 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:37 crc kubenswrapper[4949]: E0120 15:31:37.791148 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:51 crc kubenswrapper[4949]: I0120 15:31:51.789483 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:51 crc kubenswrapper[4949]: E0120 15:31:51.790429 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:32:04 crc kubenswrapper[4949]: I0120 15:32:04.796863 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:32:05 crc kubenswrapper[4949]: I0120 15:32:05.494283 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"} Jan 20 15:32:26 crc kubenswrapper[4949]: I0120 15:32:26.679797 4949 generic.go:334] "Generic (PLEG): container finished" podID="a6c12b14-7d12-46ea-be9c-15789d700112" containerID="9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7" exitCode=0 Jan 20 15:32:26 crc kubenswrapper[4949]: I0120 15:32:26.679882 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerDied","Data":"9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7"} Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.231529 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311618 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311778 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311806 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311961 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.312054 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.312099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.318716 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x" (OuterVolumeSpecName: "kube-api-access-gsr7x") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: 
"a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "kube-api-access-gsr7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.318742 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph" (OuterVolumeSpecName: "ceph") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.319282 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.340058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory" (OuterVolumeSpecName: "inventory") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.345766 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.348143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.350963 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414070 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414119 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414137 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414152 4949 
reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414167 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414184 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414195 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.699975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerDied","Data":"958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf"} Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.700407 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.700020 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.847201 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:28 crc kubenswrapper[4949]: E0120 15:32:28.847996 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.848022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.848573 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.849637 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854273 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854506 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854609 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.855246 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.855244 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.857646 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063633 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063918 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: 
\"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.164954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165226 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.169653 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.169974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.170033 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.171707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.171981 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.181222 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.479570 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:30 crc kubenswrapper[4949]: I0120 15:32:30.059149 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:30 crc kubenswrapper[4949]: W0120 15:32:30.063318 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccd4282a_7ba2_4eda_9078_00d3f0ff58c4.slice/crio-31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f WatchSource:0}: Error finding container 31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f: Status 404 returned error can't find the container with id 31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f Jan 20 15:32:30 crc kubenswrapper[4949]: I0120 15:32:30.716694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerStarted","Data":"31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f"} Jan 20 15:32:31 crc kubenswrapper[4949]: I0120 15:32:31.727244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerStarted","Data":"43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f"} Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.691324 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" podStartSLOduration=92.03315455 podStartE2EDuration="1m32.691300963s" podCreationTimestamp="2026-01-20 15:32:28 +0000 UTC" firstStartedPulling="2026-01-20 15:32:30.066256105 +0000 UTC m=+2545.876086963" lastFinishedPulling="2026-01-20 15:32:30.724402518 +0000 UTC m=+2546.534233376" 
observedRunningTime="2026-01-20 15:32:31.752897063 +0000 UTC m=+2547.562727951" watchObservedRunningTime="2026-01-20 15:34:00.691300963 +0000 UTC m=+2636.501131821" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.698813 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.701718 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.709580 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.726546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.727839 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.728146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 
15:34:00.830661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831172 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.853343 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.026061 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.499797 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.610622 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerStarted","Data":"5ca3f3b3d5d28fc595329025f84cb3ba0155bfb1f85916545dd5b6c003b93b77"} Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.620193 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7" exitCode=0 Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.620306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"} Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.622740 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:34:04 crc kubenswrapper[4949]: I0120 15:34:04.642946 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1" exitCode=0 Jan 20 
15:34:04 crc kubenswrapper[4949]: I0120 15:34:04.643054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"} Jan 20 15:34:06 crc kubenswrapper[4949]: I0120 15:34:06.678745 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerStarted","Data":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"} Jan 20 15:34:06 crc kubenswrapper[4949]: I0120 15:34:06.709011 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ln9wm" podStartSLOduration=3.75691968 podStartE2EDuration="6.708986237s" podCreationTimestamp="2026-01-20 15:34:00 +0000 UTC" firstStartedPulling="2026-01-20 15:34:02.622508073 +0000 UTC m=+2638.432338931" lastFinishedPulling="2026-01-20 15:34:05.57457463 +0000 UTC m=+2641.384405488" observedRunningTime="2026-01-20 15:34:06.702427734 +0000 UTC m=+2642.512258592" watchObservedRunningTime="2026-01-20 15:34:06.708986237 +0000 UTC m=+2642.518817095" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.026933 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.027748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.102269 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.763065 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.822430 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:13 crc kubenswrapper[4949]: I0120 15:34:13.733807 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ln9wm" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server" containerID="cri-o://0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" gracePeriod=2 Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.204290 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.309907 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.310010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.310252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.311091 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities" (OuterVolumeSpecName: "utilities") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.324672 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc" (OuterVolumeSpecName: "kube-api-access-5jdxc") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "kube-api-access-5jdxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.333311 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412327 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412382 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412394 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744013 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" exitCode=0 Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744313 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"}
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"5ca3f3b3d5d28fc595329025f84cb3ba0155bfb1f85916545dd5b6c003b93b77"}
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744470 4949 scope.go:117] "RemoveContainer" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.776843 4949 scope.go:117] "RemoveContainer" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.810258 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"]
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.810327 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"]
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.819706 4949 scope.go:117] "RemoveContainer" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.848101 4949 scope.go:117] "RemoveContainer" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"
Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.848927 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": container with ID starting with 0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f not found: ID does not exist" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.848985 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"} err="failed to get container status \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": rpc error: code = NotFound desc = could not find container \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": container with ID starting with 0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f not found: ID does not exist"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.849021 4949 scope.go:117] "RemoveContainer" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"
Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.850248 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": container with ID starting with a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1 not found: ID does not exist" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850302 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"} err="failed to get container status \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": rpc error: code = NotFound desc = could not find container \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": container with ID starting with a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1 not found: ID does not exist"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850333 4949 scope.go:117] "RemoveContainer" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"
Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.850781 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": container with ID starting with 4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7 not found: ID does not exist" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"
Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850840 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"} err="failed to get container status \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": rpc error: code = NotFound desc = could not find container \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": container with ID starting with 4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7 not found: ID does not exist"
Jan 20 15:34:16 crc kubenswrapper[4949]: I0120 15:34:16.802082 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" path="/var/lib/kubelet/pods/83ae6c9a-a314-4dc0-9859-1febb6555498/volumes"
Jan 20 15:34:27 crc kubenswrapper[4949]: I0120 15:34:27.152501 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:34:27 crc kubenswrapper[4949]: I0120 15:34:27.152979 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.152385 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.153085 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.919151 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920002 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-utilities"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920028 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-utilities"
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920064 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-content"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920073 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-content"
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920090 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920099 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920303 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.921807 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.939902 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054561 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054593 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155821 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155941 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.156376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.158617 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.175511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.242582 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.825398 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157583 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd" exitCode=0
Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157724 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd"}
Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerStarted","Data":"e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73"}
Jan 20 15:35:01 crc kubenswrapper[4949]: I0120 15:35:01.174666 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f" exitCode=0
Jan 20 15:35:01 crc kubenswrapper[4949]: I0120 15:35:01.174775 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f"}
Jan 20 15:35:03 crc kubenswrapper[4949]: I0120 15:35:03.202673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerStarted","Data":"0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25"}
Jan 20 15:35:03 crc kubenswrapper[4949]: I0120 15:35:03.224394 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4h62x" podStartSLOduration=3.315814551 podStartE2EDuration="6.224370967s" podCreationTimestamp="2026-01-20 15:34:57 +0000 UTC" firstStartedPulling="2026-01-20 15:34:59.159048515 +0000 UTC m=+2694.968879373" lastFinishedPulling="2026-01-20 15:35:02.067604911 +0000 UTC m=+2697.877435789" observedRunningTime="2026-01-20 15:35:03.217485544 +0000 UTC m=+2699.027316412" watchObservedRunningTime="2026-01-20 15:35:03.224370967 +0000 UTC m=+2699.034201825"
Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.243583 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.244575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.316851 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:09 crc kubenswrapper[4949]: I0120 15:35:09.304489 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:09 crc kubenswrapper[4949]: I0120 15:35:09.362090 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:35:11 crc kubenswrapper[4949]: I0120 15:35:11.276837 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4h62x" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server" containerID="cri-o://0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" gracePeriod=2
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.291437 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" exitCode=0
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.291524 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25"}
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.292208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73"}
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.292229 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73"
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.354871 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459042 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") "
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459204 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") "
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459265 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") "
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.460024 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities" (OuterVolumeSpecName: "utilities") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.465721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj" (OuterVolumeSpecName: "kube-api-access-b6zbj") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "kube-api-access-b6zbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.522128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561407 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") on node \"crc\" DevicePath \"\""
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561463 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561475 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.298755 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x"
Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.321569 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.328973 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:35:14 crc kubenswrapper[4949]: I0120 15:35:14.798128 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" path="/var/lib/kubelet/pods/94bdc9ae-4946-48f5-8aa5-15a138c85b14/volumes"
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.152508 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.153176 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.153225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd"
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.154132 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.154238 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" gracePeriod=600
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.429882 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" exitCode=0
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.429969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"}
Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.430281 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:35:28 crc kubenswrapper[4949]: I0120 15:35:28.442178 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"}
Jan 20 15:37:18 crc kubenswrapper[4949]: I0120 15:37:18.433486 4949 generic.go:334] "Generic (PLEG): container finished" podID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerID="43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f" exitCode=0
Jan 20 15:37:18 crc kubenswrapper[4949]: I0120 15:37:18.433575 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerDied","Data":"43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f"}
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.897198 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993429 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993619 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993671 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993774 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") "
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.999592 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l" (OuterVolumeSpecName: "kube-api-access-wnx8l") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "kube-api-access-wnx8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.999600 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph" (OuterVolumeSpecName: "ceph") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.003872 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.042135 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.045852 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.046415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory" (OuterVolumeSpecName: "inventory") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096140 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096181 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096195 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096206 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096219 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096230 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") on node \"crc\" DevicePath \"\""
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerDied","Data":"31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f"}
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457094 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457134 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.582366 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"]
Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.582809 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-content"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583020 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-content"
Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583041 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583051 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-utilities"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583090 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-utilities"
Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583107 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583118 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583363 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583404 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.584148 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596069 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596336 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596538 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597169 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597338 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597503 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.611881 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"]
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707352 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707388 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707532 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\")
pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707745 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707916 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.708017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.708047 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809194 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809434 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809468 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809489 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809559 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809658 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.810435 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.810867 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.814623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.815209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.815453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.816045 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.816103 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.818174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.818657 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.828070 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: 
I0120 15:37:20.829130 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.913551 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:21 crc kubenswrapper[4949]: I0120 15:37:21.455943 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"] Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.479597 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerStarted","Data":"c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a"} Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.480002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerStarted","Data":"311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537"} Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.509723 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" podStartSLOduration=2.062464891 podStartE2EDuration="2.509697072s" podCreationTimestamp="2026-01-20 15:37:20 +0000 UTC" firstStartedPulling="2026-01-20 15:37:21.463617508 +0000 UTC m=+2837.273448366" lastFinishedPulling="2026-01-20 15:37:21.910849679 +0000 UTC 
m=+2837.720680547" observedRunningTime="2026-01-20 15:37:22.501919833 +0000 UTC m=+2838.311750701" watchObservedRunningTime="2026-01-20 15:37:22.509697072 +0000 UTC m=+2838.319527930" Jan 20 15:37:27 crc kubenswrapper[4949]: I0120 15:37:27.151965 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:37:27 crc kubenswrapper[4949]: I0120 15:37:27.152510 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:37:57 crc kubenswrapper[4949]: I0120 15:37:57.152325 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:37:57 crc kubenswrapper[4949]: I0120 15:37:57.155197 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152136 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152731 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152779 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.153548 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.153598 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" gracePeriod=600 Jan 20 15:38:27 crc kubenswrapper[4949]: E0120 15:38:27.372143 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:28 crc kubenswrapper[4949]: 
I0120 15:38:28.074429 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" exitCode=0 Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.074538 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"} Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.074824 4949 scope.go:117] "RemoveContainer" containerID="42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.075507 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:28 crc kubenswrapper[4949]: E0120 15:38:28.075835 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:40 crc kubenswrapper[4949]: I0120 15:38:40.788884 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:40 crc kubenswrapper[4949]: E0120 15:38:40.789833 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:52 crc kubenswrapper[4949]: I0120 15:38:52.789262 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:52 crc kubenswrapper[4949]: E0120 15:38:52.790365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:05 crc kubenswrapper[4949]: I0120 15:39:05.789129 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:05 crc kubenswrapper[4949]: E0120 15:39:05.789976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.535260 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.539212 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.562215 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642835 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.745989 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746216 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746992 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.747048 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.770946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.876993 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:18 crc kubenswrapper[4949]: I0120 15:39:18.402382 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:18 crc kubenswrapper[4949]: I0120 15:39:18.553655 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"a94e74f233c2ad8e5fbd2dcc0d41236255c1ea5d5a8af535972972a863634d26"} Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.566302 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d" exitCode=0 Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.566369 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d"} Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.573040 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:39:20 crc kubenswrapper[4949]: I0120 15:39:20.578225 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd"} Jan 20 15:39:20 crc kubenswrapper[4949]: I0120 15:39:20.789263 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:20 crc kubenswrapper[4949]: E0120 15:39:20.789545 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.382231 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.384856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.401948 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533721 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533787 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533883 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: 
\"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.590085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd"} Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.589938 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd" exitCode=0 Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635635 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635832 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.636251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.636805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.666753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.713286 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.210028 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:22 crc kubenswrapper[4949]: W0120 15:39:22.218494 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f342955_4a85_4515_a30f_4df633975c84.slice/crio-a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0 WatchSource:0}: Error finding container a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0: Status 404 returned error can't find the container with id a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0 Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603185 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" exitCode=0 Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603246 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544"} Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603277 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.622607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" 
event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.632972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.647243 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dsht" podStartSLOduration=3.480564975 podStartE2EDuration="7.647226325s" podCreationTimestamp="2026-01-20 15:39:17 +0000 UTC" firstStartedPulling="2026-01-20 15:39:19.572610661 +0000 UTC m=+2955.382441559" lastFinishedPulling="2026-01-20 15:39:23.739272051 +0000 UTC m=+2959.549102909" observedRunningTime="2026-01-20 15:39:24.646284986 +0000 UTC m=+2960.456115844" watchObservedRunningTime="2026-01-20 15:39:24.647226325 +0000 UTC m=+2960.457057183" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.662052 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" exitCode=0 Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.662160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.877479 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.877792 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.922898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.683572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.710464 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zlpwm" podStartSLOduration=2.518769884 podStartE2EDuration="8.71043884s" podCreationTimestamp="2026-01-20 15:39:21 +0000 UTC" firstStartedPulling="2026-01-20 15:39:22.605505652 +0000 UTC m=+2958.415336520" lastFinishedPulling="2026-01-20 15:39:28.797174608 +0000 UTC m=+2964.607005476" observedRunningTime="2026-01-20 15:39:29.707027752 +0000 UTC m=+2965.516858630" watchObservedRunningTime="2026-01-20 15:39:29.71043884 +0000 UTC m=+2965.520269708" Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.737242 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:31 crc kubenswrapper[4949]: I0120 15:39:31.714341 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:31 crc kubenswrapper[4949]: I0120 15:39:31.714974 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.522090 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 
15:39:32.522403 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dsht" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" containerID="cri-o://c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" gracePeriod=2 Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.710618 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" exitCode=0 Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.710710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c"} Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.765402 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zlpwm" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" probeResult="failure" output=< Jan 20 15:39:32 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:39:32 crc kubenswrapper[4949]: > Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.961909 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097797 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097937 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.098752 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities" (OuterVolumeSpecName: "utilities") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.106893 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv" (OuterVolumeSpecName: "kube-api-access-2kzkv") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "kube-api-access-2kzkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.153885 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199801 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199836 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199849 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"a94e74f233c2ad8e5fbd2dcc0d41236255c1ea5d5a8af535972972a863634d26"} Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723794 4949 scope.go:117] "RemoveContainer" containerID="c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723815 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.756265 4949 scope.go:117] "RemoveContainer" containerID="1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.760285 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.770434 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.778661 4949 scope.go:117] "RemoveContainer" containerID="9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d" Jan 20 15:39:34 crc kubenswrapper[4949]: I0120 15:39:34.796131 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:34 crc kubenswrapper[4949]: E0120 15:39:34.796908 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:34 crc kubenswrapper[4949]: I0120 15:39:34.800304 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" path="/var/lib/kubelet/pods/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058/volumes" Jan 20 15:39:41 crc kubenswrapper[4949]: I0120 15:39:41.767957 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:41 crc kubenswrapper[4949]: I0120 15:39:41.842686 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:42 crc kubenswrapper[4949]: I0120 15:39:42.013374 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:42 crc kubenswrapper[4949]: I0120 15:39:42.816384 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zlpwm" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" containerID="cri-o://06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" gracePeriod=2 Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.313435 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.398805 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.398905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.399121 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.400291 4949 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities" (OuterVolumeSpecName: "utilities") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.410095 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz" (OuterVolumeSpecName: "kube-api-access-s9vtz") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "kube-api-access-s9vtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.501822 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.501857 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.556909 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.603209 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825687 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" exitCode=0 Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825736 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825765 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0"} Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825785 4949 scope.go:117] "RemoveContainer" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825741 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.848870 4949 scope.go:117] "RemoveContainer" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.865340 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.878425 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.887500 4949 scope.go:117] "RemoveContainer" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915155 4949 scope.go:117] "RemoveContainer" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.915564 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": container with ID starting with 06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291 not found: ID does not exist" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915599 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} err="failed to get container status \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": rpc error: code = NotFound desc = could not find container \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": container with ID starting with 06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291 not found: ID does 
not exist" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915623 4949 scope.go:117] "RemoveContainer" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.915875 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": container with ID starting with 4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0 not found: ID does not exist" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915898 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} err="failed to get container status \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": rpc error: code = NotFound desc = could not find container \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": container with ID starting with 4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0 not found: ID does not exist" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915912 4949 scope.go:117] "RemoveContainer" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.916101 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": container with ID starting with b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544 not found: ID does not exist" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.916121 4949 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544"} err="failed to get container status \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": rpc error: code = NotFound desc = could not find container \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": container with ID starting with b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544 not found: ID does not exist" Jan 20 15:39:44 crc kubenswrapper[4949]: I0120 15:39:44.803466 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f342955-4a85-4515-a30f-4df633975c84" path="/var/lib/kubelet/pods/1f342955-4a85-4515-a30f-4df633975c84/volumes" Jan 20 15:39:48 crc kubenswrapper[4949]: I0120 15:39:48.789130 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:48 crc kubenswrapper[4949]: E0120 15:39:48.790012 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:02 crc kubenswrapper[4949]: I0120 15:40:02.788720 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:02 crc kubenswrapper[4949]: E0120 15:40:02.789620 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:13 crc kubenswrapper[4949]: I0120 15:40:13.790713 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:13 crc kubenswrapper[4949]: E0120 15:40:13.791637 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:15 crc kubenswrapper[4949]: I0120 15:40:15.119788 4949 generic.go:334] "Generic (PLEG): container finished" podID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerID="c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a" exitCode=0 Jan 20 15:40:15 crc kubenswrapper[4949]: I0120 15:40:15.119864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerDied","Data":"c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a"} Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.566057 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737707 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737777 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737853 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737884 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737970 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 
15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738005 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738079 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738216 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738251 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738286 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.744467 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph" (OuterVolumeSpecName: "ceph") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.760218 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm" (OuterVolumeSpecName: "kube-api-access-2rfbm") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "kube-api-access-2rfbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.760480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.763959 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.769062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.771477 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.779543 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory" (OuterVolumeSpecName: "inventory") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.782816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.783480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.783645 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.784991 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.840745 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841206 4949 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841267 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841355 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841439 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841512 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841607 4949 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 
15:40:16.841667 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841721 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841781 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841842 4949 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142821 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerDied","Data":"311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537"} Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142869 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537" Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142880 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:40:26 crc kubenswrapper[4949]: I0120 15:40:26.789617 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:26 crc kubenswrapper[4949]: E0120 15:40:26.790394 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.301162 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.303965 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.303990 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304022 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304031 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304060 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304069 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304185 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304197 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304216 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304226 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304256 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304270 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304302 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304310 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304816 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" 
containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304871 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304959 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.307050 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.312012 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.312243 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332443 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332487 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332633 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332707 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332800 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332906 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzsxv\" (UniqueName: 
\"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332939 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333024 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333084 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333215 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333263 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333341 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.345864 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.368952 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.382565 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.382656 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.386305 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435533 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435602 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435704 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: 
\"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435813 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc 
kubenswrapper[4949]: I0120 15:40:32.435889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435928 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435954 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436033 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436074 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzsxv\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436122 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436177 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436264 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436290 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436313 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436384 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437661 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437810 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437831 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: 
\"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437937 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438451 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.443265 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.443668 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.443888 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.448342 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.448955 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.457413 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzsxv\" (UniqueName: 
\"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539758 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539814 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539841 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 
15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539900 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539964 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540051 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540073 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc 
kubenswrapper[4949]: I0120 15:40:32.540336 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540408 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541112 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: 
\"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541195 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.544023 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.544777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.545093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.545124 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.545145 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.546536 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.546933 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.550009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.566802 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.649768 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.701040 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.897666 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.899363 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.912045 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.947395 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.947445 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.014666 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.016378 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.019049 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.022601 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.049188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.049278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050100 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 
20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.075468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.096602 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.102710 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.105270 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.105270 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.107186 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.107393 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-csksn" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.117421 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.149394 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.150882 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152421 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152453 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152511 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153348 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153378 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") 
pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153514 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153123 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.154093 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.166714 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.172584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.222510 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254762 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254817 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254837 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254856 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254941 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255052 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255071 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.258581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.258813 4949 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.296254 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.299300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.300611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.302081 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.303372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.313003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.317603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.338612 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.347352 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356713 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356820 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356905 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357114 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc 
kubenswrapper[4949]: I0120 15:40:33.357179 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357337 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357846 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.360405 
4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.360688 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.362925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.366374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.371948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.372324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.380050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.391010 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.397909 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.426468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.449509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.478417 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.725904 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.946732 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.292836 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.332402 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerStarted","Data":"e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.332455 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerStarted","Data":"d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.335399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"72224a88b64e3392b48bd04c97394de0c63f9a71c2f3236e2ee7a8db0ad4a025"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.338172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"4fc05c0ec4431d2066a8ab606bbb4842c66bc62ce9e8a5d896279b55d996da16"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.340471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerStarted","Data":"7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.340710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerStarted","Data":"9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.343873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"ac9775da49df14a99eb1fa58155b73adcb489ea43bb3acd535078796dd99950a"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.358739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-176a-account-create-update-gqg2s" podStartSLOduration=2.358713705 podStartE2EDuration="2.358713705s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:34.350825637 +0000 UTC m=+3030.160656515" watchObservedRunningTime="2026-01-20 15:40:34.358713705 +0000 UTC m=+3030.168544593" Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.376723 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-6d468" podStartSLOduration=2.376700683 podStartE2EDuration="2.376700683s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:34.364120067 +0000 UTC m=+3030.173950945" watchObservedRunningTime="2026-01-20 15:40:34.376700683 +0000 UTC m=+3030.186531541" Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 
15:40:35.361653 4949 generic.go:334] "Generic (PLEG): container finished" podID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerID="e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5" exitCode=0 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.361767 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerDied","Data":"e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.367686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"355de920866dafa64746511b96c0f241a02c39500999bdc7193f65335734bfa4"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.371009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"6513cd7c8e5584e538704be3216cc31257fe1b6e202bd58db26e03a398e1a5ab"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.371088 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"7bca075afa807a2a8891bc8c183c2316f637a4a481f57d1df51d0783fe02b3e2"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.381722 4949 generic.go:334] "Generic (PLEG): container finished" podID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerID="7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c" exitCode=0 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.382117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6d468" 
event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerDied","Data":"7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.388412 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.393362 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"939130e52640d57727d0c2eb545d6490119076149220b0697b640a9863781cf1"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.393405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"90c01e98f19a66d691dfb325988c77e8df1030ac9826f89d9476077423e59403"} Jan 20 15:40:35 crc kubenswrapper[4949]: W0120 15:40:35.400352 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80681a49_f9f1_4208_a90e_77c74cc6860d.slice/crio-e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6 WatchSource:0}: Error finding container e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6: Status 404 returned error can't find the container with id e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.422837 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.06268637 podStartE2EDuration="3.422774959s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="2026-01-20 15:40:33.368887422 +0000 UTC m=+3029.178718280" lastFinishedPulling="2026-01-20 15:40:34.728976011 +0000 UTC m=+3030.538806869" observedRunningTime="2026-01-20 15:40:35.413489997 +0000 UTC m=+3031.223320875" 
watchObservedRunningTime="2026-01-20 15:40:35.422774959 +0000 UTC m=+3031.232605827" Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.462212 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.201578295 podStartE2EDuration="3.46219409s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="2026-01-20 15:40:33.46278202 +0000 UTC m=+3029.272612878" lastFinishedPulling="2026-01-20 15:40:34.723397815 +0000 UTC m=+3030.533228673" observedRunningTime="2026-01-20 15:40:35.448122997 +0000 UTC m=+3031.257953855" watchObservedRunningTime="2026-01-20 15:40:35.46219409 +0000 UTC m=+3031.272024948" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.408483 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"3100aacb59a194664bbac69672b3d4af326651f3f2d4579ba752f2ad350282e0"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.409136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.413395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"17830b243d2c3cc0eb092a96ea4eda4ef8130e7928fdf461f04f2a71a34b96bf"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.930159 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.937899 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.948431 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.948408163 podStartE2EDuration="4.948408163s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:36.453510082 +0000 UTC m=+3032.263340940" watchObservedRunningTime="2026-01-20 15:40:36.948408163 +0000 UTC m=+3032.758239021" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.069793 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"c1f501b4-e612-41a4-aef2-fdaf166aa018\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.069960 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"c1f501b4-e612-41a4-aef2-fdaf166aa018\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070114 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod 
\"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1f501b4-e612-41a4-aef2-fdaf166aa018" (UID: "c1f501b4-e612-41a4-aef2-fdaf166aa018"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.071509 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" (UID: "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.079186 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g" (OuterVolumeSpecName: "kube-api-access-wv42g") pod "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" (UID: "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d"). InnerVolumeSpecName "kube-api-access-wv42g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.082666 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w" (OuterVolumeSpecName: "kube-api-access-t9q9w") pod "c1f501b4-e612-41a4-aef2-fdaf166aa018" (UID: "c1f501b4-e612-41a4-aef2-fdaf166aa018"). InnerVolumeSpecName "kube-api-access-t9q9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173177 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173220 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173235 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173249 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerDied","Data":"9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423137 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423141 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.427195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"1ccfa6cce9202c4b0c4e30b10d09699e1f66ac03a630bbe9f9fe0aa288fdea1f"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.428980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerDied","Data":"d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.429037 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.428996 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.452997 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.452974869 podStartE2EDuration="5.452974869s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:37.447006951 +0000 UTC m=+3033.256837809" watchObservedRunningTime="2026-01-20 15:40:37.452974869 +0000 UTC m=+3033.262805727" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.651221 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.702131 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371243 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:38 crc kubenswrapper[4949]: E0120 15:40:38.371854 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371882 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: E0120 15:40:38.371910 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371921 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 
15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.372162 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.372207 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.373069 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.379721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.383490 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p7v5q" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.402653 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495936 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495967 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.496095 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597712 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.598068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.604581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.605240 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.608580 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.620277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.690069 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:39 crc kubenswrapper[4949]: I0120 15:40:39.288953 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:39 crc kubenswrapper[4949]: I0120 15:40:39.457732 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerStarted","Data":"89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772"} Jan 20 15:40:40 crc kubenswrapper[4949]: I0120 15:40:40.790947 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:40 crc kubenswrapper[4949]: E0120 15:40:40.791925 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:42 crc kubenswrapper[4949]: I0120 15:40:42.866684 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:42 crc kubenswrapper[4949]: I0120 15:40:42.993336 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.426905 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.426970 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.470895 4949 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.473019 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.484194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.484240 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.495401 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.495465 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.535204 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.535898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.548831 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:44 crc kubenswrapper[4949]: I0120 15:40:44.508474 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.517996 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.517980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerStarted","Data":"d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be"} Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.553624 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-q7rxq" podStartSLOduration=2.689683811 podStartE2EDuration="7.553602817s" podCreationTimestamp="2026-01-20 15:40:38 +0000 UTC" firstStartedPulling="2026-01-20 15:40:39.299873756 +0000 UTC m=+3035.109704634" lastFinishedPulling="2026-01-20 15:40:44.163792782 +0000 UTC m=+3039.973623640" observedRunningTime="2026-01-20 15:40:45.551325626 +0000 UTC m=+3041.361156484" watchObservedRunningTime="2026-01-20 15:40:45.553602817 +0000 UTC m=+3041.363433675" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.666083 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.666168 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.834548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.944573 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.945228 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:54 crc kubenswrapper[4949]: I0120 15:40:54.789235 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:54 crc kubenswrapper[4949]: E0120 15:40:54.790081 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:55 crc kubenswrapper[4949]: I0120 15:40:55.615130 4949 generic.go:334] "Generic (PLEG): container finished" podID="1501061b-c734-43b8-8f88-0d895789e209" containerID="d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be" exitCode=0 Jan 20 15:40:55 crc kubenswrapper[4949]: I0120 15:40:55.615196 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerDied","Data":"d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be"} Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.042878 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196063 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196238 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196392 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196498 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.202455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k" (OuterVolumeSpecName: "kube-api-access-q9n7k") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "kube-api-access-q9n7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.205424 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data" (OuterVolumeSpecName: "config-data") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.206592 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.228446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298668 4949 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298722 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298736 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298748 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632860 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerDied","Data":"89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772"} Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632907 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632914 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.889126 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:57 crc kubenswrapper[4949]: E0120 15:40:57.889934 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.889958 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.890181 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.891386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.897730 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.898265 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p7v5q" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.898357 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.905496 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.905970 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.989822 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:57 crc 
kubenswrapper[4949]: I0120 15:40:57.991927 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.994832 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.016708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.016899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017022 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017130 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017291 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017345 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017385 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.062571 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.119420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " 
pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.119804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.119927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120129 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 
15:40:58.120299 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120574 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120685 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.121046 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.121194 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.123780 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.132556 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: 
\"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143252 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.150191 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.152383 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.154662 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.160384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.165160 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.220206 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222676 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222823 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222844 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222923 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222984 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223070 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.227311 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.228254 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.228730 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.250933 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.259739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.262563 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.265660 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.267620 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.311289 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324811 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324850 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.325036 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.325058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.328305 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.328963 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.329991 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.330483 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.331003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.349113 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.360224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426845 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426899 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427680 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531498 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531811 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531907 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 
15:40:58.532018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.532149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.537119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.537453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.538181 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.549992 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " 
pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.550705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.552841 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.656252 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.674962 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.811630 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.045031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.318240 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"] Jan 20 15:40:59 crc kubenswrapper[4949]: W0120 15:40:59.393120 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a5ca1bf_b5a9_49cb_aacd_6d9ac032a888.slice/crio-d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf WatchSource:0}: Error finding container d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf: Status 404 returned error can't find the container with id d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.393494 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.659857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"60e7d4a8ac8e4a2e8bf0ecf78ff8e8d81b95e8b85f820ad01959c6f7e2278fab"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.660809 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.661710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" 
event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerStarted","Data":"ea99d43cd7edf20d061752ac24599747f7f68498741c299e4acc055102c8398a"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.666564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"5d5d321f43f48229f54657ca4759751495b89237e7296cf027757d58bd32dcaa"} Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.679505 4949 generic.go:334] "Generic (PLEG): container finished" podID="d723357a-5423-49c3-9263-ff768f28745f" containerID="89c6ab7833515623f918af75e4868a7557c8c949f4c4ee87ed24783becaf2be7" exitCode=0 Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.679895 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerDied","Data":"89c6ab7833515623f918af75e4868a7557c8c949f4c4ee87ed24783becaf2be7"} Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.686996 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.095178 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.697614 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.697987 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.700875 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerStarted","Data":"69af6dca3592219f9a1f54d88e96dc5462f2f62fba6843bf2b3fb9d68f5af10c"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.701469 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.704238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.704909 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.716131 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.395689262 podStartE2EDuration="4.716109231s" podCreationTimestamp="2026-01-20 15:40:57 +0000 UTC" firstStartedPulling="2026-01-20 15:40:58.826861878 +0000 UTC m=+3054.636692726" lastFinishedPulling="2026-01-20 15:41:00.147281837 +0000 UTC m=+3055.957112695" observedRunningTime="2026-01-20 15:41:01.713022205 +0000 UTC m=+3057.522853053" watchObservedRunningTime="2026-01-20 15:41:01.716109231 +0000 UTC m=+3057.525940089" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.755758 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" podStartSLOduration=3.75573326 podStartE2EDuration="3.75573326s" podCreationTimestamp="2026-01-20 
15:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:01.739505639 +0000 UTC m=+3057.549336487" watchObservedRunningTime="2026-01-20 15:41:01.75573326 +0000 UTC m=+3057.565564128" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.762010 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.761991607 podStartE2EDuration="3.761991607s" podCreationTimestamp="2026-01-20 15:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:01.756908497 +0000 UTC m=+3057.566739355" watchObservedRunningTime="2026-01-20 15:41:01.761991607 +0000 UTC m=+3057.571822465" Jan 20 15:41:02 crc kubenswrapper[4949]: I0120 15:41:02.715571 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" containerID="cri-o://332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" gracePeriod=30 Jan 20 15:41:02 crc kubenswrapper[4949]: I0120 15:41:02.715599 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" containerID="cri-o://5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" gracePeriod=30 Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.551690 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672077 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672140 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672236 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672377 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672447 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.673233 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs" (OuterVolumeSpecName: "logs") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.674749 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.683154 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.683266 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts" (OuterVolumeSpecName: "scripts") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.687744 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4" (OuterVolumeSpecName: "kube-api-access-2rfm4") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "kube-api-access-2rfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.716468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.727667 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" exitCode=0 Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.727710 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" exitCode=143 Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728100 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728475 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.735533 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data" (OuterVolumeSpecName: "config-data") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.767312 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774613 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774649 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774660 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774668 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774675 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774683 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774691 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.796538 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: E0120 15:41:03.796999 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797130 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} err="failed to get container status \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797241 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: E0120 15:41:03.797603 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797725 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} err="failed to get container status \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": rpc error: code = NotFound desc = could not find container \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797830 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798159 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} err="failed to get container status \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798183 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798394 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} err="failed to get container status \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": rpc error: code = NotFound desc = could not find container \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.060863 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.077257 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.085957 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: E0120 15:41:04.086619 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.086698 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: E0120 15:41:04.086779 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.086830 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.087078 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.087147 4949 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.088102 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090434 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090775 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.115243 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182106 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") 
" pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182266 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182299 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182329 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182588 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284961 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod 
\"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285081 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285550 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.286378 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.289590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.290422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.291486 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.302106 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.302614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.303093 
4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.306236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.406217 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.806114 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" path="/var/lib/kubelet/pods/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888/volumes" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945089 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945341 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" containerID="cri-o://30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945451 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" containerID="cri-o://423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945486 4949 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" containerID="cri-o://48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945535 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" containerID="cri-o://f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" gracePeriod=30 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.071147 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780547 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780577 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" exitCode=2 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780585 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780593 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780613 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780643 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780656 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1"} Jan 20 15:41:07 crc kubenswrapper[4949]: I0120 15:41:07.788873 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:07 crc kubenswrapper[4949]: E0120 15:41:07.789878 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.349985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.657809 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.737243 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.737512 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" containerID="cri-o://632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" gracePeriod=10 Jan 20 15:41:08 crc kubenswrapper[4949]: W0120 15:41:08.783617 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d247f3c_18c5_4045_a6a5_e25dc78c33ee.slice/crio-fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309 WatchSource:0}: Error finding container fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309: Status 404 returned error can't find the container with id fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309 Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.818475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.243021 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.277236 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.411405 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.411470 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.412105 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414149 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414411 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414468 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414500 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414706 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414762 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414823 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.415999 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.421662 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.455793 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts" (OuterVolumeSpecName: "scripts") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.462721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8" (OuterVolumeSpecName: "kube-api-access-99zf8") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "kube-api-access-99zf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.468857 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22" (OuterVolumeSpecName: "kube-api-access-d8g22") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "kube-api-access-d8g22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.522942 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523234 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523243 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523251 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.573796 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.627896 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.749342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.757674 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.766751 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config" (OuterVolumeSpecName: "config") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.788236 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.795143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.807432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.826764 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.829969 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"8f0dd94a9e63de42a5122bf4ccb941587cc9b12585cbfa4f431123811ef49ec3"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831277 4949 scope.go:117] "RemoveContainer" containerID="423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831868 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831896 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831907 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831915 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831926 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831934 4949 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831942 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.835531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"702761acc29d2479b8a0e2c5fc083db526a0b9260e3b6e0d6941a0b424d45019"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837262 4949 generic.go:334] "Generic (PLEG): container finished" podID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" exitCode=0 Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837331 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837380 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.842061 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.845947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data" (OuterVolumeSpecName: "config-data") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.933839 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.994022 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.005533 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.011585 4949 scope.go:117] "RemoveContainer" containerID="48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.057860 4949 scope.go:117] "RemoveContainer" containerID="f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.082113 4949 scope.go:117] "RemoveContainer" containerID="30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.121332 4949 
scope.go:117] "RemoveContainer" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.152788 4949 scope.go:117] "RemoveContainer" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.185317 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.197160 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.216642 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217069 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="init" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217086 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="init" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217108 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217114 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217127 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217133 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 
15:41:10.217147 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217153 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217177 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217190 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217196 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217362 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217375 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217382 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217394 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 
15:41:10.217407 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.219177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226470 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226695 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226888 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.230980 4949 scope.go:117] "RemoveContainer" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.232804 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": container with ID starting with 632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e not found: ID does not exist" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.232855 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} err="failed to get container status \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": rpc error: code = NotFound desc = could not find container \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": container with ID starting with 
632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e not found: ID does not exist" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.232886 4949 scope.go:117] "RemoveContainer" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.233303 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": container with ID starting with 8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854 not found: ID does not exist" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.233331 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} err="failed to get container status \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": rpc error: code = NotFound desc = could not find container \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": container with ID starting with 8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854 not found: ID does not exist" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.246015 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345267 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345679 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345741 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448076 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 
15:41:10.448170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448268 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.449443 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.453956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454075 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.455705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.464621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.466649 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgnrx\" (UniqueName: 
\"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.615820 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.819226 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" path="/var/lib/kubelet/pods/4108fe7d-5c92-44fa-ad65-bfaee526f439/volumes" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.820834 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" path="/var/lib/kubelet/pods/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e/volumes" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.860078 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"ba81347e3e0d091efdb765becf9f0428cb5277bb150959039c96553feb449aa4"} Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.861332 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.866857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86"} Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.905312 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.905285173 podStartE2EDuration="6.905285173s" podCreationTimestamp="2026-01-20 15:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:10.891687555 +0000 UTC m=+3066.701518413" watchObservedRunningTime="2026-01-20 15:41:10.905285173 +0000 UTC m=+3066.715116061" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.932148 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.129244864 podStartE2EDuration="13.93212902s" podCreationTimestamp="2026-01-20 15:40:57 +0000 UTC" firstStartedPulling="2026-01-20 15:40:59.054564311 +0000 UTC m=+3054.864395169" lastFinishedPulling="2026-01-20 15:41:08.857448467 +0000 UTC m=+3064.667279325" observedRunningTime="2026-01-20 15:41:10.928305519 +0000 UTC m=+3066.738136377" watchObservedRunningTime="2026-01-20 15:41:10.93212902 +0000 UTC m=+3066.741959878" Jan 20 15:41:11 crc kubenswrapper[4949]: I0120 15:41:11.193994 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:11 crc kubenswrapper[4949]: I0120 15:41:11.886202 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"90ddd88b841bcb897f9b9f285d3c118a9a69ce0fe7d69cd564aa22f05881cee8"} Jan 20 15:41:12 crc kubenswrapper[4949]: I0120 15:41:12.085735 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:12 crc kubenswrapper[4949]: I0120 15:41:12.896204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} Jan 20 15:41:13 crc kubenswrapper[4949]: I0120 15:41:13.907723 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} Jan 
20 15:41:14 crc kubenswrapper[4949]: I0120 15:41:14.918175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.945982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946460 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" containerID="cri-o://9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946115 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" containerID="cri-o://dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946489 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" containerID="cri-o://60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946476 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" containerID="cri-o://635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.973017 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.620784076 podStartE2EDuration="7.97299932s" podCreationTimestamp="2026-01-20 15:41:10 +0000 UTC" firstStartedPulling="2026-01-20 15:41:11.197120108 +0000 UTC m=+3067.006950956" lastFinishedPulling="2026-01-20 15:41:17.549335342 +0000 UTC m=+3073.359166200" observedRunningTime="2026-01-20 15:41:17.971928996 +0000 UTC m=+3073.781759874" watchObservedRunningTime="2026-01-20 15:41:17.97299932 +0000 UTC m=+3073.782830188" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.220822 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.619278 4949 scope.go:117] "RemoveContainer" containerID="76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.646140 4949 scope.go:117] "RemoveContainer" containerID="0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.678008 4949 scope.go:117] "RemoveContainer" containerID="b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.789146 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:18 crc kubenswrapper[4949]: E0120 15:41:18.789495 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.958175 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" exitCode=0 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959342 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" exitCode=2 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959416 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" exitCode=0 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959752 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.910739 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/manila-scheduler-0" Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.964676 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.967697 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" containerID="cri-o://ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" gracePeriod=30 Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.967767 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" containerID="cri-o://6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" gracePeriod=30 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.635573 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.759896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760427 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760491 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760551 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761092 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761409 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761928 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761951 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.765052 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts" (OuterVolumeSpecName: "scripts") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.766596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx" (OuterVolumeSpecName: "kube-api-access-wgnrx") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "kube-api-access-wgnrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.808583 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.825448 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.843474 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864888 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864917 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864927 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864943 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.868569 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.930357 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data" (OuterVolumeSpecName: "config-data") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967081 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967504 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967588 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod 
\"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967718 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967736 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.968408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969048 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969073 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969089 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.974649 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.974755 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts" (OuterVolumeSpecName: "scripts") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.975954 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6" (OuterVolumeSpecName: "kube-api-access-m8dq6") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "kube-api-access-m8dq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981055 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" exitCode=0 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981124 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981182 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"90ddd88b841bcb897f9b9f285d3c118a9a69ce0fe7d69cd564aa22f05881cee8"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981206 4949 scope.go:117] "RemoveContainer" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985049 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ed776ab-5efa-46df-b070-54de4042b64e" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" exitCode=0 Jan 20 15:41:20 crc 
kubenswrapper[4949]: I0120 15:41:20.985079 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ed776ab-5efa-46df-b070-54de4042b64e" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" exitCode=0 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985146 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"5d5d321f43f48229f54657ca4759751495b89237e7296cf027757d58bd32dcaa"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985222 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.017756 4949 scope.go:117] "RemoveContainer" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.018138 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.026366 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.041735 4949 scope.go:117] "RemoveContainer" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.046817 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047245 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047310 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047374 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047494 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047562 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047617 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047666 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047732 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047873 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.048064 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048130 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048449 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048600 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048677 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048750 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048810 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048915 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.052635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.059730 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.062419 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.063466 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.062696 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072267 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072361 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072443 4949 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072295 4949 scope.go:117] "RemoveContainer" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.108934 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.109508 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data" (OuterVolumeSpecName: "config-data") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117174 4949 scope.go:117] "RemoveContainer" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.117621 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": container with ID starting with 9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551 not found: ID does not exist" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117660 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} err="failed to get container status \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": rpc error: code = NotFound desc = could not find container \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": container with ID starting with 9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117691 4949 scope.go:117] "RemoveContainer" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.117940 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": container with ID starting with 635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b not found: ID does not exist" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117969 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} err="failed to get container status \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": rpc error: code = NotFound desc = could not find container \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": container with ID starting with 635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117988 4949 scope.go:117] "RemoveContainer" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.118260 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": container with ID starting with 60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2 not found: ID does not exist" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118304 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} err="failed to get container status \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": rpc error: code = NotFound desc = could not find container \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": container with ID starting with 60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118321 4949 scope.go:117] "RemoveContainer" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 
15:41:21.118769 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": container with ID starting with dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6 not found: ID does not exist" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118803 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} err="failed to get container status \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": rpc error: code = NotFound desc = could not find container \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": container with ID starting with dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118816 4949 scope.go:117] "RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.141796 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.160583 4949 scope.go:117] "RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.160958 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" 
containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.160986 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} err="failed to get container status \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161008 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.161328 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161351 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} err="failed to get container status \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161368 4949 scope.go:117] 
"RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161794 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} err="failed to get container status \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161840 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.162064 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} err="failed to get container status \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180115 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180258 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180275 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281584 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" 
(UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281884 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.282197 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.283031 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.285415 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.286767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.287361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.299323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.302330 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.303476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.389283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.400438 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.413966 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.425246 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.427167 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.437679 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.441632 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613329 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613670 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613740 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613780 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613920 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716218 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716495 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716581 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716615 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.721039 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " 
pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.735226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.837876 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.863158 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: W0120 15:41:21.869687 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ddebe6_ef20_4de2_9eaa_690312bbbf0a.slice/crio-47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289 WatchSource:0}: Error finding container 47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289: Status 404 returned error can't find the container with id 47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289 Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.993515 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289"} Jan 20 15:41:22 crc kubenswrapper[4949]: W0120 15:41:22.355756 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbf90ca_14f6_4274_b63b_f4e71c1ce845.slice/crio-551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb WatchSource:0}: Error finding container 551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb: Status 404 returned error can't find the container with id 551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb Jan 20 15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.361257 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.800095 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" path="/var/lib/kubelet/pods/8ed776ab-5efa-46df-b070-54de4042b64e/volumes" Jan 20 
15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.801534 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" path="/var/lib/kubelet/pods/b17688bb-6e3e-4b48-bffa-bf1383aa47a1/volumes" Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.039854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"9d6e344563f46fcb47a17dca04e6553f906ca62203c3e44a70be2a6e915e2a43"} Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.039915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb"} Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.067731 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"3fd48d6989ae3a58a944703ba4e5d06481069dec4db5548c5db9d8de833a165a"} Jan 20 15:41:24 crc kubenswrapper[4949]: I0120 15:41:24.080369 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"1fa86fee69bc75f97813c969dd9680c1d3fcbb4983ff73cdfde6219d94bfb4e9"} Jan 20 15:41:24 crc kubenswrapper[4949]: I0120 15:41:24.121000 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.12097679 podStartE2EDuration="3.12097679s" podCreationTimestamp="2026-01-20 15:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:24.104950124 +0000 UTC m=+3079.914781002" watchObservedRunningTime="2026-01-20 15:41:24.12097679 +0000 UTC 
m=+3079.930807648" Jan 20 15:41:25 crc kubenswrapper[4949]: I0120 15:41:25.788110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 20 15:41:27 crc kubenswrapper[4949]: I0120 15:41:27.117150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"8ed49821affc6ebd664cee7c204a957d406ecac6f3be7536e5393efd5351dc20"} Jan 20 15:41:28 crc kubenswrapper[4949]: I0120 15:41:28.128730 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"9318a3539773c714bb7ddcdee14692d7a15c6cf4713f5d5fd44856adb68ca5ef"} Jan 20 15:41:29 crc kubenswrapper[4949]: I0120 15:41:29.728849 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 20 15:41:29 crc kubenswrapper[4949]: I0120 15:41:29.825006 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:30 crc kubenswrapper[4949]: I0120 15:41:30.155086 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share" containerID="cri-o://a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795" gracePeriod=30 Jan 20 15:41:30 crc kubenswrapper[4949]: I0120 15:41:30.156358 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe" containerID="cri-o://317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86" gracePeriod=30 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.166474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"f0563b5270f17a5dd68925da295f7260a8e6cdad02ab8e79f06c169bbdcf674b"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.167121 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169045 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerID="317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86" exitCode=0 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169068 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerID="a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795" exitCode=1 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.207316 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9236957989999999 podStartE2EDuration="10.20726853s" podCreationTimestamp="2026-01-20 15:41:21 +0000 UTC" firstStartedPulling="2026-01-20 15:41:21.874098602 +0000 UTC m=+3077.683929460" lastFinishedPulling="2026-01-20 15:41:30.157671333 +0000 UTC m=+3085.967502191" observedRunningTime="2026-01-20 15:41:31.191034519 +0000 UTC m=+3087.000865377" 
watchObservedRunningTime="2026-01-20 15:41:31.20726853 +0000 UTC m=+3087.017099408"
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.397815 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533538 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533586 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533779 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533804 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") "
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.534281 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.534370 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.539129 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts" (OuterVolumeSpecName: "scripts") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.539342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.540213 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph" (OuterVolumeSpecName: "ceph") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.549880 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k" (OuterVolumeSpecName: "kube-api-access-mwm9k") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "kube-api-access-mwm9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.589564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.627731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data" (OuterVolumeSpecName: "config-data") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637340 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637375 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637390 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637399 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637407 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637415 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637424 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637432 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.838443 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.197644 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.201453 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"60e7d4a8ac8e4a2e8bf0ecf78ff8e8d81b95e8b85f820ad01959c6f7e2278fab"}
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.201495 4949 scope.go:117] "RemoveContainer" containerID="317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.236265 4949 scope.go:117] "RemoveContainer" containerID="a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.253864 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.265267 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.277751 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.278299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.278323 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share"
Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.278359 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.278368 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.283044 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.283114 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.285230 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.287350 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.291196 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377347 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377446 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377506 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377779 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480041 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480296 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480327 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480378 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480500 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.484821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.484900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.485326 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.485569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.493614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.497628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.622237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.788718 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.789274 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.799664 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" path="/var/lib/kubelet/pods/c4221b9c-f2d4-437c-9b6c-1b9341a74219/volumes"
Jan 20 15:41:33 crc kubenswrapper[4949]: I0120 15:41:33.144991 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 20 15:41:33 crc kubenswrapper[4949]: I0120 15:41:33.205908 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"66e30ce5a65196e0578a77c11f58ba35eb1fdea646668071e499e612a1115c49"}
Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.218213 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"8202b7507d3576386b01c6b4b86ee50037ba30a968180c26c9606bf5a1d14731"}
Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.218500 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"cd2a419f428ffcb48607715118b8db44746180ba3074dd046ff563c69163191a"}
Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.241693 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.241664228 podStartE2EDuration="2.241664228s" podCreationTimestamp="2026-01-20 15:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:34.240057307 +0000 UTC m=+3090.049888175" watchObservedRunningTime="2026-01-20 15:41:34.241664228 +0000 UTC m=+3090.051495086"
Jan 20 15:41:42 crc kubenswrapper[4949]: I0120 15:41:42.623236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Jan 20 15:41:43 crc kubenswrapper[4949]: I0120 15:41:43.301400 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Jan 20 15:41:43 crc kubenswrapper[4949]: I0120 15:41:43.788919 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:41:43 crc kubenswrapper[4949]: E0120 15:41:43.789167 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:41:51 crc kubenswrapper[4949]: I0120 15:41:51.398500 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 20 15:41:54 crc kubenswrapper[4949]: I0120 15:41:54.146250 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Jan 20 15:41:56 crc kubenswrapper[4949]: I0120 15:41:56.789211 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:41:56 crc kubenswrapper[4949]: E0120 15:41:56.789969 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:42:10 crc kubenswrapper[4949]: I0120 15:42:10.789993 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:42:10 crc kubenswrapper[4949]: E0120 15:42:10.791005 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:42:24 crc kubenswrapper[4949]: I0120 15:42:24.796877 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:42:24 crc kubenswrapper[4949]: E0120 15:42:24.797872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:42:36 crc kubenswrapper[4949]: I0120 15:42:36.789202 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:42:36 crc kubenswrapper[4949]: E0120 15:42:36.790061 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:42:50 crc kubenswrapper[4949]: I0120 15:42:50.789726 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:42:50 crc kubenswrapper[4949]: E0120 15:42:50.790301 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:43:02 crc kubenswrapper[4949]: I0120 15:43:02.789325 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:43:02 crc kubenswrapper[4949]: E0120 15:43:02.790555 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:43:14 crc kubenswrapper[4949]: I0120 15:43:14.795025 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:43:14 crc kubenswrapper[4949]: E0120 15:43:14.795863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:43:29 crc kubenswrapper[4949]: I0120 15:43:29.790732 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"
Jan 20 15:43:30 crc kubenswrapper[4949]: I0120 15:43:30.447085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"}
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.022829 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"]
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.025619 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.059751 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"]
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.180653 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.181110 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.181312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283316 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283355 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.284009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.284014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.334758 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.375329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q"
Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.904916 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"]
Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232418 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" exitCode=0
Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73"}
Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"e9cde6f8f0b1fe32ffe869b0de4686fd3fb41b702a8f3a6a856dc7ae163544cc"}
Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.235733 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:44:53 crc kubenswrapper[4949]: I0120 15:44:53.248776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"}
Jan 20 15:44:54 crc kubenswrapper[4949]: I0120 15:44:54.259152 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" exitCode=0
Jan 20 15:44:54 crc kubenswrapper[4949]: I0120 15:44:54.259212 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"}
Jan 20 15:44:56 crc kubenswrapper[4949]: I0120 15:44:56.284460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"}
Jan 20 15:44:56 crc kubenswrapper[4949]: I0120 15:44:56.309912 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdm6q" podStartSLOduration=2.452525749 podStartE2EDuration="5.309896654s" podCreationTimestamp="2026-01-20 15:44:51 +0000 UTC" firstStartedPulling="2026-01-20 15:44:52.235511303 +0000 UTC m=+3288.045342161" lastFinishedPulling="2026-01-20 15:44:55.092882218 +0000 UTC m=+3290.902713066" observedRunningTime="2026-01-20 15:44:56.305672471 +0000 UTC m=+3292.115503329" watchObservedRunningTime="2026-01-20 15:44:56.309896654 +0000 UTC m=+3292.119727512"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.163431 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"]
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.165373 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.167362 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.167444 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.174173 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"]
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.274804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.275194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"
Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.275273 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376471 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376665 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.377634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.384599 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.397119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.540102 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.018078 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"] Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.333763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerStarted","Data":"f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4"} Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.334102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerStarted","Data":"9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453"} Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.351696 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" 
podStartSLOduration=1.351674643 podStartE2EDuration="1.351674643s" podCreationTimestamp="2026-01-20 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:45:01.349473544 +0000 UTC m=+3297.159304402" watchObservedRunningTime="2026-01-20 15:45:01.351674643 +0000 UTC m=+3297.161505501" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.375698 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.375750 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.447748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.344317 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerID="f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4" exitCode=0 Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.345906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerDied","Data":"f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4"} Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.407549 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.481885 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.727573 4949 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856032 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856258 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856296 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.857026 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.857405 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.862699 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.863596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm" (OuterVolumeSpecName: "kube-api-access-4hjpm") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "kube-api-access-4hjpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.959780 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.959821 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365731 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerDied","Data":"9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453"} Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.366019 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365859 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdm6q" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" containerID="cri-o://67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" gracePeriod=2 Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365790 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.426639 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.435826 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.800534 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591138ca-7bcb-4584-8089-82e6223d1457" path="/var/lib/kubelet/pods/591138ca-7bcb-4584-8089-82e6223d1457/volumes" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.867062 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.980755 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981594 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: 
\"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981793 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities" (OuterVolumeSpecName: "utilities") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.982369 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.986988 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw" (OuterVolumeSpecName: "kube-api-access-fq5lw") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "kube-api-access-fq5lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.000142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.084377 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.084415 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379391 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" exitCode=0 Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379502 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379568 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"} Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"e9cde6f8f0b1fe32ffe869b0de4686fd3fb41b702a8f3a6a856dc7ae163544cc"} Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379740 4949 scope.go:117] "RemoveContainer" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.417915 4949 scope.go:117] "RemoveContainer" 
containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.446680 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.448196 4949 scope.go:117] "RemoveContainer" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.458980 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.499566 4949 scope.go:117] "RemoveContainer" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.499995 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": container with ID starting with 67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6 not found: ID does not exist" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500039 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"} err="failed to get container status \"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": rpc error: code = NotFound desc = could not find container \"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": container with ID starting with 67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6 not found: ID does not exist" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500068 4949 scope.go:117] "RemoveContainer" 
containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.500373 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": container with ID starting with abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3 not found: ID does not exist" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500404 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"} err="failed to get container status \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": rpc error: code = NotFound desc = could not find container \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": container with ID starting with abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3 not found: ID does not exist" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500454 4949 scope.go:117] "RemoveContainer" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.501089 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": container with ID starting with b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73 not found: ID does not exist" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.501141 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73"} err="failed to get container status \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": rpc error: code = NotFound desc = could not find container \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": container with ID starting with b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73 not found: ID does not exist" Jan 20 15:45:06 crc kubenswrapper[4949]: I0120 15:45:06.805826 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a858c71e-19cb-4464-91f5-366a6695586c" path="/var/lib/kubelet/pods/a858c71e-19cb-4464-91f5-366a6695586c/volumes" Jan 20 15:45:18 crc kubenswrapper[4949]: I0120 15:45:18.883842 4949 scope.go:117] "RemoveContainer" containerID="4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc" Jan 20 15:45:57 crc kubenswrapper[4949]: I0120 15:45:57.152375 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:45:57 crc kubenswrapper[4949]: I0120 15:45:57.153334 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.655771 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656726 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" 
containerName="extract-content" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656740 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-content" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656759 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656767 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656783 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-utilities" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656788 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-utilities" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656798 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656804 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656957 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656969 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.658604 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.668556 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690688 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793342 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793479 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.794074 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.815153 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.993452 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:05 crc kubenswrapper[4949]: I0120 15:46:05.516154 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:05 crc kubenswrapper[4949]: W0120 15:46:05.519843 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a79a8b8_cc72_4615_afc6_1710a61d29e6.slice/crio-4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c WatchSource:0}: Error finding container 4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c: Status 404 returned error can't find the container with id 4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.099947 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" exitCode=0 Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.100106 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e"} Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.100425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c"} Jan 20 15:46:07 crc kubenswrapper[4949]: I0120 15:46:07.109909 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" 
event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} Jan 20 15:46:08 crc kubenswrapper[4949]: I0120 15:46:08.137842 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" exitCode=0 Jan 20 15:46:08 crc kubenswrapper[4949]: I0120 15:46:08.137923 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} Jan 20 15:46:09 crc kubenswrapper[4949]: I0120 15:46:09.149918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} Jan 20 15:46:09 crc kubenswrapper[4949]: I0120 15:46:09.182918 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tjvnw" podStartSLOduration=2.546728817 podStartE2EDuration="5.18290262s" podCreationTimestamp="2026-01-20 15:46:04 +0000 UTC" firstStartedPulling="2026-01-20 15:46:06.104144948 +0000 UTC m=+3361.913975816" lastFinishedPulling="2026-01-20 15:46:08.740318721 +0000 UTC m=+3364.550149619" observedRunningTime="2026-01-20 15:46:09.177059366 +0000 UTC m=+3364.986890254" watchObservedRunningTime="2026-01-20 15:46:09.18290262 +0000 UTC m=+3364.992733478" Jan 20 15:46:14 crc kubenswrapper[4949]: I0120 15:46:14.993607 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:14 crc kubenswrapper[4949]: I0120 15:46:14.994152 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.047913 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.249803 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.294551 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.226773 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tjvnw" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" containerID="cri-o://8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" gracePeriod=2 Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.713617 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.760893 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761026 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761081 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761639 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities" (OuterVolumeSpecName: "utilities") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.770814 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd" (OuterVolumeSpecName: "kube-api-access-fvxrd") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "kube-api-access-fvxrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.812246 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864045 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864080 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864091 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.238979 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" exitCode=0 Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239036 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239381 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c"} Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239414 4949 scope.go:117] "RemoveContainer" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239045 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.263309 4949 scope.go:117] "RemoveContainer" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.296983 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.298104 4949 scope.go:117] "RemoveContainer" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.313854 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358119 4949 scope.go:117] "RemoveContainer" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: E0120 15:46:18.358480 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": container with ID starting with 8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c not found: ID does not exist" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 
15:46:18.358534 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} err="failed to get container status \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": rpc error: code = NotFound desc = could not find container \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": container with ID starting with 8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358562 4949 scope.go:117] "RemoveContainer" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: E0120 15:46:18.358828 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": container with ID starting with 45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3 not found: ID does not exist" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358857 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} err="failed to get container status \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": rpc error: code = NotFound desc = could not find container \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": container with ID starting with 45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3 not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358876 4949 scope.go:117] "RemoveContainer" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc 
kubenswrapper[4949]: E0120 15:46:18.359287 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": container with ID starting with e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e not found: ID does not exist" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.359318 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e"} err="failed to get container status \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": rpc error: code = NotFound desc = could not find container \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": container with ID starting with e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.804996 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" path="/var/lib/kubelet/pods/3a79a8b8-cc72-4615-afc6-1710a61d29e6/volumes" Jan 20 15:46:27 crc kubenswrapper[4949]: I0120 15:46:27.151895 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:46:27 crc kubenswrapper[4949]: I0120 15:46:27.152375 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.152888 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.153697 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.153770 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.155115 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.155225 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1" gracePeriod=600 Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707689 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" 
containerID="1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1" exitCode=0 Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"} Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707941 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"} Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707965 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.749828 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750821 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-utilities" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750837 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-utilities" Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750858 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750866 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750878 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-content" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750886 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-content" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.751126 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.752817 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kk8nn"/"default-dockercfg-ldpvs" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766082 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kk8nn"/"openshift-service-ca.crt" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766178 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kk8nn"/"kube-root-ca.crt" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.770187 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.937873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.938281 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.039991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.040123 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.040812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.057922 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.087572 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.636906 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.868915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"8d775bd2daca3a9966099b7b0b03300db83c7b661bbfa1ed54825c53ac39aac9"} Jan 20 15:47:20 crc kubenswrapper[4949]: I0120 15:47:20.987071 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"} Jan 20 15:47:20 crc kubenswrapper[4949]: I0120 15:47:20.987756 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"} Jan 20 15:47:21 crc kubenswrapper[4949]: I0120 15:47:21.022494 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kk8nn/must-gather-ccspq" podStartSLOduration=2.782368158 podStartE2EDuration="10.022463041s" podCreationTimestamp="2026-01-20 15:47:11 +0000 UTC" firstStartedPulling="2026-01-20 15:47:12.641860598 +0000 UTC m=+3428.451691456" lastFinishedPulling="2026-01-20 15:47:19.881955481 +0000 UTC m=+3435.691786339" observedRunningTime="2026-01-20 15:47:21.010617997 +0000 UTC m=+3436.820448915" watchObservedRunningTime="2026-01-20 15:47:21.022463041 +0000 UTC m=+3436.832293939" Jan 20 15:47:23 crc kubenswrapper[4949]: E0120 15:47:23.357826 4949 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.41:52012->38.102.83.41:36705: write tcp 38.102.83.41:52012->38.102.83.41:36705: write: broken pipe Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.924612 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"] Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.926004 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.962817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.963006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.064835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.064986 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc 
kubenswrapper[4949]: I0120 15:47:24.064991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.088467 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.243176 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:25 crc kubenswrapper[4949]: I0120 15:47:25.029462 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerStarted","Data":"6d89f9e66aa529d3f5a75012a12ed5c4b38240802264c61527ea25cdb6ad0dad"} Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.721333 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ffcb5df54-fhbnh_25689957-1a77-40ab-8a4c-1e40a1524bac/barbican-api-log/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.731767 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ffcb5df54-fhbnh_25689957-1a77-40ab-8a4c-1e40a1524bac/barbican-api/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.770620 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bfc57b96-w7nhj_02b718a3-85a6-4bb6-9e17-9ff6936cb5c4/barbican-keystone-listener-log/0.log" Jan 20 15:47:26 crc 
kubenswrapper[4949]: I0120 15:47:26.778071 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bfc57b96-w7nhj_02b718a3-85a6-4bb6-9e17-9ff6936cb5c4/barbican-keystone-listener/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.822908 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d486fc9-sgwzr_0f7e061d-75da-4fc4-80c8-1163e314ebb5/barbican-worker-log/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.837731 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d486fc9-sgwzr_0f7e061d-75da-4fc4-80c8-1163e314ebb5/barbican-worker/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.892970 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh_da7cee45-2ef4-4ebc-8067-08dbe10af76a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.917684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/ceilometer-central-agent/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.938662 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/ceilometer-notification-agent/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.943841 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/sg-core/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.953060 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/proxy-httpd/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.969708 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-tp625_70d9d029-15fb-479a-b668-926d3167b179/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.001633 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv_9f5697b2-a2f0-4b5c-949a-0f52e9e39beb/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.015635 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_605e8425-f80d-4cd4-981d-afb431ec676f/cinder-api-log/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.056205 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_605e8425-f80d-4cd4-981d-afb431ec676f/cinder-api/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.272859 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f7354f89-1113-43f0-b654-a4222ee05faf/cinder-backup/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.287923 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f7354f89-1113-43f0-b654-a4222ee05faf/probe/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.316670 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef233e09-2d4d-4f12-9adf-e1bab1dcd101/cinder-scheduler/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.338270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef233e09-2d4d-4f12-9adf-e1bab1dcd101/probe/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.398123 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_83382677-6882-49eb-a111-498346e2d6dc/cinder-volume/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.416511 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_83382677-6882-49eb-a111-498346e2d6dc/probe/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.430270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x77fv_6951e28c-3b02-44dd-9823-d0e4d1a779d5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.450822 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pscmc_aa357e67-831a-4584-bf56-0c2e58d1aed8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.466875 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hf624_d723357a-5423-49c3-9263-ff768f28745f/dnsmasq-dns/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.476128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hf624_d723357a-5423-49c3-9263-ff768f28745f/init/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.493966 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2e7cb37b-debf-462c-8a81-81ce79da0ee9/glance-log/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.506592 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2e7cb37b-debf-462c-8a81-81ce79da0ee9/glance-httpd/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.515659 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80681a49-f9f1-4208-a90e-77c74cc6860d/glance-log/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.531496 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80681a49-f9f1-4208-a90e-77c74cc6860d/glance-httpd/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.892460 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66d45cfc44-ltr94_08182d24-cea6-4daa-9dbb-efcb48b76434/horizon-log/0.log"
Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.992324 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66d45cfc44-ltr94_08182d24-cea6-4daa-9dbb-efcb48b76434/horizon/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.014943 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb_d1ff69ad-f42e-4882-a580-c2fc212ab3a4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.043714 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fqp9f_a8ca811b-8738-49ed-b552-bdf38a5d5650/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.137420 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b69c674cf-wdfrq_7dd53c2b-505a-4783-9e2a-34857e6158ea/keystone-api/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.151390 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4a8d0e18-297d-407d-8c7c-64555052b960/kube-state-metrics/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.195351 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9xbns_ccd4282a-7ba2-4eda-9078-00d3f0ff58c4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.203281 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-176a-account-create-update-gqg2s_92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d/mariadb-account-create-update/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.218026 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9d247f3c-18c5-4045-a6a5-e25dc78c33ee/manila-api-log/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.335244 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9d247f3c-18c5-4045-a6a5-e25dc78c33ee/manila-api/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.344257 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-6d468_c1f501b4-e612-41a4-aef2-fdaf166aa018/mariadb-database-create/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.361212 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-sync-q7rxq_1501061b-c734-43b8-8f88-0d895789e209/manila-db-sync/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.454414 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_acbf90ca-14f6-4274-b63b-f4e71c1ce845/manila-scheduler/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.462027 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_acbf90ca-14f6-4274-b63b-f4e71c1ce845/probe/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.507018 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3/manila-share/0.log"
Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.512141 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3/probe/0.log"
Jan 20 15:47:38 crc kubenswrapper[4949]: I0120 15:47:38.149065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerStarted","Data":"d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4"}
Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.855442 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_485725f6-91f1-413b-89f5-21bde785bd94/memcached/0.log"
Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.896685 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b8cd78967-6cmpj_dae84f47-70ef-4a10-ae62-dae601b0de81/neutron-api/0.log"
Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.916232 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b8cd78967-6cmpj_dae84f47-70ef-4a10-ae62-dae601b0de81/neutron-httpd/0.log"
Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.944270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf_a6c12b14-7d12-46ea-be9c-15789d700112/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.108875 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0174a61d-76ab-4198-91f1-d97291db561b/nova-api-log/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.380791 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0174a61d-76ab-4198-91f1-d97291db561b/nova-api-api/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.469308 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_432760ec-2ef6-4335-a7ba-21a2d73ede73/nova-cell0-conductor-conductor/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.574291 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e19f25ae-0920-4573-9f2e-6447ca83e76c/nova-cell1-conductor-conductor/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.663387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16e90cac-28e0-4d75-a613-d77c9263f634/nova-cell1-novncproxy-novncproxy/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.715134 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff_97b58b41-5a8f-47f7-af93-382d7a6f0e69/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.778475 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4185f7d0-b70a-4d49-82b9-e249bd1b2c48/nova-metadata-log/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.572307 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4185f7d0-b70a-4d49-82b9-e249bd1b2c48/nova-metadata-metadata/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.664996 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_51e2ed93-379c-457d-992a-57160c6be51a/nova-scheduler-scheduler/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.697889 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f03e93a7-24b6-499c-89bc-1bf3e67221a6/galera/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.708860 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f03e93a7-24b6-499c-89bc-1bf3e67221a6/mysql-bootstrap/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.739530 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee020527-9591-42dc-b000-3153caede9cf/galera/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.749813 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee020527-9591-42dc-b000-3153caede9cf/mysql-bootstrap/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.758258 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0b4f97ab-7425-4271-bd09-0e89073ebdc1/openstackclient/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.770381 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q26vt_f4968375-00d3-4db1-93b4-db0808c464b2/openstack-network-exporter/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.783903 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nqhh2_c4179fca-4378-4347-a519-96120d9ae1cc/ovn-controller/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.799572 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovsdb-server/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.808887 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovs-vswitchd/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.816445 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovsdb-server-init/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.847256 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7j58g_eb1d8e10-2c84-4a8f-a3d0-653432297fb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.857811 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_425d9be8-fa72-4cbe-bcc7-444e46e67a08/ovn-northd/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.865054 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_425d9be8-fa72-4cbe-bcc7-444e46e67a08/openstack-network-exporter/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.881702 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab38c923-ec3b-400d-864a-c5e8a0d53999/ovsdbserver-nb/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.888318 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab38c923-ec3b-400d-864a-c5e8a0d53999/openstack-network-exporter/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.910463 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_17c9cb64-1ff5-4087-b424-1c2bb7398ba0/ovsdbserver-sb/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.917297 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_17c9cb64-1ff5-4087-b424-1c2bb7398ba0/openstack-network-exporter/0.log"
Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.975546 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-754d6d4c8d-v7txj_69138579-1fa8-4d89-b94f-46e3424d604c/placement-log/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.010377 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-754d6d4c8d-v7txj_69138579-1fa8-4d89-b94f-46e3424d604c/placement-api/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.030992 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_81813586-eebe-4c95-ad8b-433b8c501337/rabbitmq/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.037851 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_81813586-eebe-4c95-ad8b-433b8c501337/setup-container/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.106306 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_18d74874-b8f5-4706-abfe-c8d1cb7bb21b/rabbitmq/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.112618 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_18d74874-b8f5-4706-abfe-c8d1cb7bb21b/setup-container/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.131157 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5_3b31ae29-db74-4104-b8b5-377bfa3f766a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.142506 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk_f5d6330b-b87a-476b-bebc-a790026e5dd3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.153781 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bdp7d_4d06892f-967c-4bd9-ac54-c36c80e3df73/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.174498 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6z8gd_53b63ff2-c70c-4429-99c6-759d0eb33ae9/ssh-known-hosts-edpm-deployment/0.log"
Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.191491 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7_cb58fe7e-6a7d-46ea-82ad-02e9200e8042/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.014903 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log"
Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.023880 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log"
Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.048479 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.651449 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.667847 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.674319 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.683014 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.694284 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.702255 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.712555 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.717560 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.729047 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.747365 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log"
Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.757597 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.037253 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.041914 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.420924 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.472972 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.488865 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.498211 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.506741 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.524695 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.578364 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.592416 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.622416 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.868759 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.879137 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log"
Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.969973 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.022690 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.060906 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.112489 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.192261 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.201065 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.219995 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log"
Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.402147 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.616023 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.624503 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.681128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.709870 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.731387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.751597 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.839721 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.846652 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log"
Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.860363 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log"
Jan 20 15:47:55 crc kubenswrapper[4949]: I0120 15:47:55.330822 4949 generic.go:334] "Generic (PLEG): container finished" podID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerID="d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4" exitCode=0
Jan 20 15:47:55 crc kubenswrapper[4949]: I0120 15:47:55.330980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerDied","Data":"d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4"}
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.452083 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58"
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.494470 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"]
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.504334 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"]
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.605600 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") "
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.605836 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") "
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.606646 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host" (OuterVolumeSpecName: "host") pod "a01ac885-313b-4cac-ad73-abd4dd2c9f97" (UID: "a01ac885-313b-4cac-ad73-abd4dd2c9f97"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.626670 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr" (OuterVolumeSpecName: "kube-api-access-ldwkr") pod "a01ac885-313b-4cac-ad73-abd4dd2c9f97" (UID: "a01ac885-313b-4cac-ad73-abd4dd2c9f97"). InnerVolumeSpecName "kube-api-access-ldwkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.707772 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") on node \"crc\" DevicePath \"\""
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.707803 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") on node \"crc\" DevicePath \"\""
Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.800666 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" path="/var/lib/kubelet/pods/a01ac885-313b-4cac-ad73-abd4dd2c9f97/volumes"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.349798 4949 scope.go:117] "RemoveContainer" containerID="d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.350262 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.709380 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"]
Jan 20 15:47:57 crc kubenswrapper[4949]: E0120 15:47:57.710809 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.710832 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.711021 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.711777 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.847153 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.847272 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.948952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.949136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.949450 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.980291 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.030284 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:58 crc kubenswrapper[4949]: W0120 15:47:58.061919 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8dab70_1888_4e69_a77f_47d2287883e9.slice/crio-ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174 WatchSource:0}: Error finding container ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174: Status 404 returned error can't find the container with id ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.215715 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d5t2m_95c38c39-62f0-4343-9628-5070d8cc10b7/control-plane-machine-set-operator/0.log"
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.230914 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/kube-rbac-proxy/0.log"
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.246764 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/machine-api-operator/0.log"
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360060 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerID="154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4" exitCode=1
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360122 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" event={"ID":"9b8dab70-1888-4e69-a77f-47d2287883e9","Type":"ContainerDied","Data":"154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4"}
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360312 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" event={"ID":"9b8dab70-1888-4e69-a77f-47d2287883e9","Type":"ContainerStarted","Data":"ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174"}
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.395399 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"]
Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.404026 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"]
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.476644 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6"
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577239 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"9b8dab70-1888-4e69-a77f-47d2287883e9\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") "
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577385 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host" (OuterVolumeSpecName: "host") pod "9b8dab70-1888-4e69-a77f-47d2287883e9" (UID: "9b8dab70-1888-4e69-a77f-47d2287883e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577913 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"9b8dab70-1888-4e69-a77f-47d2287883e9\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") "
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.579089 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") on node \"crc\" DevicePath \"\""
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.592725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22" (OuterVolumeSpecName: "kube-api-access-wcb22") pod "9b8dab70-1888-4e69-a77f-47d2287883e9" (UID: "9b8dab70-1888-4e69-a77f-47d2287883e9"). InnerVolumeSpecName "kube-api-access-wcb22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.680553 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") on node \"crc\" DevicePath \"\""
Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.385153 4949 scope.go:117] "RemoveContainer" containerID="154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4"
Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.385217 4949 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.800928 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" path="/var/lib/kubelet/pods/9b8dab70-1888-4e69-a77f-47d2287883e9/volumes" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.225257 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.238811 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.250686 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log" Jan 20 15:48:57 crc kubenswrapper[4949]: I0120 15:48:57.152639 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:48:57 crc kubenswrapper[4949]: I0120 15:48:57.153257 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.009879 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-vt2ng_7a366383-883e-4f7e-b656-d23eb0fe6294/nmstate-console-plugin/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.030983 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ndwpd_248f6a09-0064-4d9f-a4d7-13a92b06ee72/nmstate-handler/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.042035 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/nmstate-metrics/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.049254 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/kube-rbac-proxy/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.064910 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsrwb_b2bfb1bf-1717-4d51-9632-204856f869f4/nmstate-operator/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.074009 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-twsz5_71837cd3-c24a-4d86-b59f-28330f7d2809/nmstate-webhook/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.177506 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.185684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.210859 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log" Jan 20 
15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.434953 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.446000 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.450986 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.464736 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.476231 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.486695 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.501187 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.511458 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.525085 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 
15:49:12.550962 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.561666 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.898876 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.909533 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.437229 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/extract/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.445860 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/util/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.455203 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/pull/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.483330 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/extract/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.498266 4949 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/util/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.507881 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/pull/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.897381 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/registry-server/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.902669 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/extract-utilities/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.909949 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/extract-content/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.352445 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/registry-server/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.368821 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/extract-utilities/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.381778 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/extract-content/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.411684 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cnrps_e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74/marketplace-operator/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.549729 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/registry-server/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.554959 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/extract-utilities/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.563272 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/extract-content/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.016842 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/registry-server/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.022128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/extract-utilities/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.031482 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/extract-content/0.log" Jan 20 15:49:27 crc kubenswrapper[4949]: I0120 15:49:27.152383 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:49:27 crc kubenswrapper[4949]: I0120 15:49:27.153003 4949 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.176499 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:34 crc kubenswrapper[4949]: E0120 15:49:34.177466 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.177478 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.177676 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.182969 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.206349 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.440957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441271 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441431 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441606 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.442059 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.466550 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.510062 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.048353 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.270505 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.270607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"ebb54275831517bf6b265a69209886896dfc2a12fa90ea84e0ba4368c53095d4"} Jan 20 15:49:36 crc kubenswrapper[4949]: I0120 15:49:36.285058 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" exitCode=0 Jan 20 15:49:36 crc kubenswrapper[4949]: I0120 15:49:36.285245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} Jan 20 15:49:38 crc kubenswrapper[4949]: I0120 15:49:38.303847 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" exitCode=0 Jan 20 15:49:38 crc kubenswrapper[4949]: I0120 15:49:38.304393 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" 
event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519"} Jan 20 15:49:39 crc kubenswrapper[4949]: I0120 15:49:39.316407 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.510724 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.511588 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.570289 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.590271 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5rhv" podStartSLOduration=8.149500967 podStartE2EDuration="10.590251412s" podCreationTimestamp="2026-01-20 15:49:34 +0000 UTC" firstStartedPulling="2026-01-20 15:49:36.287462668 +0000 UTC m=+3572.097293526" lastFinishedPulling="2026-01-20 15:49:38.728213113 +0000 UTC m=+3574.538043971" observedRunningTime="2026-01-20 15:49:39.331055364 +0000 UTC m=+3575.140886222" watchObservedRunningTime="2026-01-20 15:49:44.590251412 +0000 UTC m=+3580.400082260" Jan 20 15:49:45 crc kubenswrapper[4949]: I0120 15:49:45.410651 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:45 crc kubenswrapper[4949]: I0120 15:49:45.465948 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.377058 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5rhv" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server" containerID="cri-o://5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" gracePeriod=2 Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.881431 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922281 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922655 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.923947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities" (OuterVolumeSpecName: "utilities") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: 
"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.933729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp" (OuterVolumeSpecName: "kube-api-access-bq5fp") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "kube-api-access-bq5fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.026463 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.026535 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.184661 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.231981 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.387972 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" exitCode=0 Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388016 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388015 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388161 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"ebb54275831517bf6b265a69209886896dfc2a12fa90ea84e0ba4368c53095d4"} Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388188 4949 scope.go:117] "RemoveContainer" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.409362 4949 scope.go:117] "RemoveContainer" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.425718 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:48 crc kubenswrapper[4949]: 
I0120 15:49:48.439453 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.441682 4949 scope.go:117] "RemoveContainer" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.484039 4949 scope.go:117] "RemoveContainer" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487097 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": container with ID starting with 5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314 not found: ID does not exist" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487155 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} err="failed to get container status \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": rpc error: code = NotFound desc = could not find container \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": container with ID starting with 5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314 not found: ID does not exist" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487184 4949 scope.go:117] "RemoveContainer" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487422 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": container 
with ID starting with f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519 not found: ID does not exist" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519"
Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487442 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519"} err="failed to get container status \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": rpc error: code = NotFound desc = could not find container \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": container with ID starting with f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519 not found: ID does not exist"
Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487455 4949 scope.go:117] "RemoveContainer" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"
Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487649 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": container with ID starting with 2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2 not found: ID does not exist" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"
Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487690 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} err="failed to get container status \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": rpc error: code = NotFound desc = could not find container \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": container with ID starting with 2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2 not found: ID does not exist"
Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.800274 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" path="/var/lib/kubelet/pods/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52/volumes"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.152678 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.153125 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.153177 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.154052 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.154097 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" gracePeriod=600
Jan 20 15:49:57 crc kubenswrapper[4949]: E0120 15:49:57.286083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460040 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" exitCode=0
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460082 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"}
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460115 4949 scope.go:117] "RemoveContainer" containerID="1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"
Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460769 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:49:57 crc kubenswrapper[4949]: E0120 15:49:57.460995 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:50:12 crc kubenswrapper[4949]: I0120 15:50:12.789576 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:50:12 crc kubenswrapper[4949]: E0120 15:50:12.790581 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:50:23 crc kubenswrapper[4949]: I0120 15:50:23.788847 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:50:23 crc kubenswrapper[4949]: E0120 15:50:23.789618 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.608231 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"]
Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609060 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-utilities"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609072 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-utilities"
Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609095 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609104 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server"
Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609121 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-content"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609127 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-content"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609304 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.610554 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.622469 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"]
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776595 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776650 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.878518 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.878574 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.878609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.879231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.879317 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.898947 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.931260 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.426812 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"]
Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.713798 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" exitCode=0
Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.713863 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c"}
Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.714066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"9da8f2722c3a4ae2a71f48976d74b6f30505d129cc5c03322698b370190492b5"}
Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.716330 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:50:31 crc kubenswrapper[4949]: I0120 15:50:31.731419 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"}
Jan 20 15:50:32 crc kubenswrapper[4949]: I0120 15:50:32.740248 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" exitCode=0
Jan 20 15:50:32 crc kubenswrapper[4949]: I0120 15:50:32.740291 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"}
Jan 20 15:50:33 crc kubenswrapper[4949]: I0120 15:50:33.761683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"}
Jan 20 15:50:33 crc kubenswrapper[4949]: I0120 15:50:33.802810 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5btxz" podStartSLOduration=2.269922445 podStartE2EDuration="5.802793176s" podCreationTimestamp="2026-01-20 15:50:28 +0000 UTC" firstStartedPulling="2026-01-20 15:50:29.715980638 +0000 UTC m=+3625.525811496" lastFinishedPulling="2026-01-20 15:50:33.248851369 +0000 UTC m=+3629.058682227" observedRunningTime="2026-01-20 15:50:33.792642667 +0000 UTC m=+3629.602473515" watchObservedRunningTime="2026-01-20 15:50:33.802793176 +0000 UTC m=+3629.612624024"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.496461 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.506651 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.524387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.628421 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.646412 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log"
Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.657427 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.774944 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.786374 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.789442 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:50:35 crc kubenswrapper[4949]: E0120 15:50:35.790875 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.791720 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.799692 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.808036 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.820672 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.828904 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.836578 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.845998 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.871809 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.889475 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log"
Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.985839 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.078922 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.088247 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.096954 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.103582 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.131301 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.280987 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.297510 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.354437 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.382684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.393693 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.604548 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.620665 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.686535 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.728152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.761599 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.804750 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.872673 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.887742 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log"
Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.902400 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log"
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.032595 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log"
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.045536 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"]
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.060000 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-6d468"]
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.078873 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"]
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.100108 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-6d468"]
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.457800 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log"
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.511605 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log"
Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.525175 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.248298 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d5t2m_95c38c39-62f0-4343-9628-5070d8cc10b7/control-plane-machine-set-operator/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.260743 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/kube-rbac-proxy/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.269152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/machine-api-operator/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.393670 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.406831 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.454615 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.480271 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.498547 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.511164 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.584721 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.596591 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.608180 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.798317 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" path="/var/lib/kubelet/pods/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d/volumes"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.798910 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" path="/var/lib/kubelet/pods/c1f501b4-e612-41a4-aef2-fdaf166aa018/volumes"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.932218 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.932269 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5btxz"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.362295 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.420623 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.429064 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.435363 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.443140 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.454285 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.541852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.558050 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.588482 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.931967 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.944338 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log"
Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.983554 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5btxz" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" probeResult="failure" output=<
Jan 20 15:50:39 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s
Jan 20 15:50:39 crc kubenswrapper[4949]: >
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.012768 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.055397 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.072241 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-vt2ng_7a366383-883e-4f7e-b656-d23eb0fe6294/nmstate-console-plugin/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.087327 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ndwpd_248f6a09-0064-4d9f-a4d7-13a92b06ee72/nmstate-handler/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.088856 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.109640 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/nmstate-metrics/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.119423 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/kube-rbac-proxy/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.130060 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.132692 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsrwb_b2bfb1bf-1717-4d51-9632-204856f869f4/nmstate-operator/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.148629 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-twsz5_71837cd3-c24a-4d86-b59f-28330f7d2809/nmstate-webhook/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.208581 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.218525 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.233929 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log"
Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.359621 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log"
Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.623215 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log"
Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.635146 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log"
Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.669404 4949 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.713877 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.737021 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.747732 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.831119 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.841616 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.852094 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.208317 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.301882 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/3.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.315366 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/kube-multus-additional-cni-plugins/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.325563 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/egress-router-binary-copy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.333628 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/cni-plugins/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.340434 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/bond-cni-plugin/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.348946 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/routeoverride-cni/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.357675 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/whereabouts-cni-bincopy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.365935 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/whereabouts-cni/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.406751 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-zvfr4_c47ecb6d-9ecf-480f-b605-4dd91e900521/multus-admission-controller/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.411907 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-zvfr4_c47ecb6d-9ecf-480f-b605-4dd91e900521/kube-rbac-proxy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.445544 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hlfls_fa4eae9d-b492-4fd3-8baf-38ed726d9e4c/network-metrics-daemon/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.450853 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hlfls_fa4eae9d-b492-4fd3-8baf-38ed726d9e4c/kube-rbac-proxy/0.log" Jan 20 15:50:46 crc kubenswrapper[4949]: I0120 15:50:46.793775 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:46 crc kubenswrapper[4949]: E0120 15:50:46.794589 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:48.999226 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:49.070257 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:49.247798 4949 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:50 crc kubenswrapper[4949]: I0120 15:50:50.918530 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5btxz" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" containerID="cri-o://5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" gracePeriod=2 Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.406569 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.426355 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.426513 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.429552 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities" (OuterVolumeSpecName: "utilities") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.456866 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t" (OuterVolumeSpecName: "kube-api-access-z969t") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "kube-api-access-z969t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.530155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.531230 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.531263 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.692827 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.780073 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930923 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" exitCode=0 Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930984 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930986 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"} Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.931040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"9da8f2722c3a4ae2a71f48976d74b6f30505d129cc5c03322698b370190492b5"} Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.931068 4949 scope.go:117] "RemoveContainer" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.975173 4949 scope.go:117] "RemoveContainer" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.982581 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 
15:50:51.993354 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.014019 4949 scope.go:117] "RemoveContainer" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.074330 4949 scope.go:117] "RemoveContainer" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.075000 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": container with ID starting with 5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79 not found: ID does not exist" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075037 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"} err="failed to get container status \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": rpc error: code = NotFound desc = could not find container \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": container with ID starting with 5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79 not found: ID does not exist" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075064 4949 scope.go:117] "RemoveContainer" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.075421 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": container with ID 
starting with 6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80 not found: ID does not exist" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075672 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"} err="failed to get container status \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": rpc error: code = NotFound desc = could not find container \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": container with ID starting with 6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80 not found: ID does not exist" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075697 4949 scope.go:117] "RemoveContainer" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.076019 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": container with ID starting with fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c not found: ID does not exist" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.076044 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c"} err="failed to get container status \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": rpc error: code = NotFound desc = could not find container \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": container with ID starting with fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c not found: 
ID does not exist" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.810195 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd358256-8547-497c-b550-c67a395e34a5" path="/var/lib/kubelet/pods/cd358256-8547-497c-b550-c67a395e34a5/volumes" Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.027951 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.035729 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.788660 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:57 crc kubenswrapper[4949]: E0120 15:50:57.789017 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:58 crc kubenswrapper[4949]: I0120 15:50:58.809829 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1501061b-c734-43b8-8f88-0d895789e209" path="/var/lib/kubelet/pods/1501061b-c734-43b8-8f88-0d895789e209/volumes" Jan 20 15:51:08 crc kubenswrapper[4949]: I0120 15:51:08.789160 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:08 crc kubenswrapper[4949]: E0120 15:51:08.790127 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.136732 4949 scope.go:117] "RemoveContainer" containerID="d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.187398 4949 scope.go:117] "RemoveContainer" containerID="7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.248189 4949 scope.go:117] "RemoveContainer" containerID="e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5" Jan 20 15:51:20 crc kubenswrapper[4949]: I0120 15:51:20.789580 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:20 crc kubenswrapper[4949]: E0120 15:51:20.790210 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:35 crc kubenswrapper[4949]: I0120 15:51:35.789953 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:35 crc kubenswrapper[4949]: E0120 15:51:35.791136 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:49 crc kubenswrapper[4949]: I0120 15:51:49.789861 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:49 crc kubenswrapper[4949]: E0120 15:51:49.790737 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:04 crc kubenswrapper[4949]: I0120 15:52:04.804461 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:04 crc kubenswrapper[4949]: E0120 15:52:04.805484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:15 crc kubenswrapper[4949]: I0120 15:52:15.789883 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:15 crc kubenswrapper[4949]: E0120 15:52:15.790757 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:30 crc kubenswrapper[4949]: I0120 15:52:30.791241 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:30 crc kubenswrapper[4949]: E0120 15:52:30.792229 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:43 crc kubenswrapper[4949]: I0120 15:52:43.789923 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:43 crc kubenswrapper[4949]: E0120 15:52:43.790968 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:58 crc kubenswrapper[4949]: I0120 15:52:58.790645 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:58 crc kubenswrapper[4949]: E0120 15:52:58.791485 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:09 crc kubenswrapper[4949]: I0120 15:53:09.789188 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:09 crc kubenswrapper[4949]: E0120 15:53:09.790177 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:23 crc kubenswrapper[4949]: I0120 15:53:23.790371 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:23 crc kubenswrapper[4949]: E0120 15:53:23.791881 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:37 crc kubenswrapper[4949]: I0120 15:53:37.791654 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:37 crc kubenswrapper[4949]: E0120 15:53:37.792754 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:53:51 crc kubenswrapper[4949]: I0120 15:53:51.789701 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:53:51 crc kubenswrapper[4949]: E0120 15:53:51.791231 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:54:02 crc kubenswrapper[4949]: I0120 15:54:02.789469 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:54:02 crc kubenswrapper[4949]: E0120 15:54:02.790288 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:54:14 crc kubenswrapper[4949]: I0120 15:54:14.801319 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:54:14 crc kubenswrapper[4949]: E0120 15:54:14.802215 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:54:27 crc kubenswrapper[4949]: I0120 15:54:27.789160 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:54:27 crc kubenswrapper[4949]: E0120 15:54:27.791109 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:54:38 crc kubenswrapper[4949]: I0120 15:54:38.807656 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:54:38 crc kubenswrapper[4949]: E0120 15:54:38.808918 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:54:51 crc kubenswrapper[4949]: I0120 15:54:51.789332 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:54:51 crc kubenswrapper[4949]: E0120 15:54:51.790011 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:55:05 crc kubenswrapper[4949]: I0120 15:55:05.789430 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 15:55:06 crc kubenswrapper[4949]: I0120 15:55:06.605739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"}
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.759599 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.761802 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-content"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.761888 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-content"
Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.761949 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-utilities"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762009 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-utilities"
Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.762075 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762126 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762358 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.763944 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.787499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.940331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.940799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.941047 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.042875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043090 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043160 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043464 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.073154 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.087693 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.609317 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.000874 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7" exitCode=0
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.000953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"}
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.001233 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"1f99c44e4ba47114f5fcc23dea1a7e5e90cc07eb1053004959a4b030db8bea3f"}
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.004782 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:55:47 crc kubenswrapper[4949]: I0120 15:55:47.025144 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"}
Jan 20 15:55:48 crc kubenswrapper[4949]: I0120 15:55:48.038128 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34" exitCode=0
Jan 20 15:55:48 crc kubenswrapper[4949]: I0120 15:55:48.038431 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"}
Jan 20 15:55:49 crc kubenswrapper[4949]: I0120 15:55:49.054319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"}
Jan 20 15:55:49 crc kubenswrapper[4949]: I0120 15:55:49.082944 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9hq6" podStartSLOduration=2.574477431 podStartE2EDuration="5.082918576s" podCreationTimestamp="2026-01-20 15:55:44 +0000 UTC" firstStartedPulling="2026-01-20 15:55:46.004422596 +0000 UTC m=+3941.814253464" lastFinishedPulling="2026-01-20 15:55:48.512863711 +0000 UTC m=+3944.322694609" observedRunningTime="2026-01-20 15:55:49.075454902 +0000 UTC m=+3944.885285780" watchObservedRunningTime="2026-01-20 15:55:49.082918576 +0000 UTC m=+3944.892749434"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.088488 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.089166 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.146720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.213618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.385814 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:57 crc kubenswrapper[4949]: I0120 15:55:57.138455 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9hq6" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server" containerID="cri-o://f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f" gracePeriod=2
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.131611 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155733 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f" exitCode=0
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"}
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"1f99c44e4ba47114f5fcc23dea1a7e5e90cc07eb1053004959a4b030db8bea3f"}
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155832 4949 scope.go:117] "RemoveContainer" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155986 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.187961 4949 scope.go:117] "RemoveContainer" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.207410 4949 scope.go:117] "RemoveContainer" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.219708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.219897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.220033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.220826 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities" (OuterVolumeSpecName: "utilities") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.226811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r" (OuterVolumeSpecName: "kube-api-access-xsl4r") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "kube-api-access-xsl4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.245439 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302015 4949 scope.go:117] "RemoveContainer" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.302581 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": container with ID starting with f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f not found: ID does not exist" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302693 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"} err="failed to get container status \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": rpc error: code = NotFound desc = could not find container \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": container with ID starting with f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302748 4949 scope.go:117] "RemoveContainer" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.303402 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": container with ID starting with d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34 not found: ID does not exist" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303432 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"} err="failed to get container status \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": rpc error: code = NotFound desc = could not find container \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": container with ID starting with d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34 not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303455 4949 scope.go:117] "RemoveContainer" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.303832 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": container with ID starting with bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7 not found: ID does not exist" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303852 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"} err="failed to get container status \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": rpc error: code = NotFound desc = could not find container \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": container with ID starting with bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7 not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322616 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322668 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322687 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.499291 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.512286 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.801569 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" path="/var/lib/kubelet/pods/ca270d2a-2bc4-49ee-ac79-58d0206557c1/volumes"
Jan 20 15:57:27 crc kubenswrapper[4949]: I0120 15:57:27.164046 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:57:27 crc kubenswrapper[4949]: I0120 15:57:27.164694 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.173827 4949 generic.go:334] "Generic (PLEG): container finished" podID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f" exitCode=0
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.173883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerDied","Data":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"}
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.175832 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.708852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/gather/0.log"
Jan 20 15:57:34 crc kubenswrapper[4949]: E0120 15:57:34.445427 4949 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:53902->38.102.83.41:36705: write tcp 38.102.83.41:53902->38.102.83.41:36705: write: connection reset by peer
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.533317 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"]
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.534426 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kk8nn/must-gather-ccspq" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy" containerID="cri-o://e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f" gracePeriod=2
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.545798 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"]
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.994879 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/copy/0.log"
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.995461 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.132888 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"3cf0a23e-747e-442b-b15a-d9db29607be8\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") "
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.133113 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"3cf0a23e-747e-442b-b15a-d9db29607be8\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") "
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.140446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn" (OuterVolumeSpecName: "kube-api-access-5mjpn") pod "3cf0a23e-747e-442b-b15a-d9db29607be8" (UID: "3cf0a23e-747e-442b-b15a-d9db29607be8"). InnerVolumeSpecName "kube-api-access-5mjpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.235588 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") on node \"crc\" DevicePath \"\""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.277962 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/copy/0.log"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278692 4949 generic.go:334] "Generic (PLEG): container finished" podID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f" exitCode=143
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278747 4949 scope.go:117] "RemoveContainer" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278749 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.301835 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.331860 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3cf0a23e-747e-442b-b15a-d9db29607be8" (UID: "3cf0a23e-747e-442b-b15a-d9db29607be8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.337220 4949 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.340360 4949 scope.go:117] "RemoveContainer" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: E0120 15:57:41.340951 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": container with ID starting with e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f not found: ID does not exist" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.340994 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"} err="failed to get container status \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": rpc error: code = NotFound desc = could not find container \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": container with ID starting with e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f not found: ID does not exist"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.341136 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: E0120 15:57:41.341489 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": container with ID starting with c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f not found: ID does not exist" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.341595 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"} err="failed to get container status \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": rpc error: code = NotFound desc = could not find container \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": container with ID starting with c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f not found: ID does not exist"
Jan 20 15:57:42 crc kubenswrapper[4949]: I0120 15:57:42.799904 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" path="/var/lib/kubelet/pods/3cf0a23e-747e-442b-b15a-d9db29607be8/volumes"
Jan 20 15:57:57 crc kubenswrapper[4949]: I0120 15:57:57.152099 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:57:57 crc kubenswrapper[4949]: I0120 15:57:57.152761 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153092 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153901 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153968 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.154842 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.154936 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532" gracePeriod=600
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742115 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532" exitCode=0
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742199 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"}
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"bc3fef792b2aaf3deb6e4efaa740bd3b894f193b180d2c2f9cdb8c064d84fc34"}
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742966 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.184703 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"]
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.186912 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187112 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-content"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-content"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187292 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-utilities"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187369 4949
state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-utilities" Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187458 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187552 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server" Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187650 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187727 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188008 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188198 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.189061 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.191822 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.192346 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.194033 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"] Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.218489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.218613 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.219495 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.321865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.321945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.322077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.324339 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.334697 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.408780 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.519611 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.965046 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"] Jan 20 16:00:00 crc kubenswrapper[4949]: W0120 16:00:00.972149 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598c0ad9_75c0_46a0_9489_ac71a51debee.slice/crio-90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a WatchSource:0}: Error finding container 90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a: Status 404 returned error can't find the container with id 90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.727833 4949 generic.go:334] "Generic (PLEG): container finished" podID="598c0ad9-75c0-46a0-9489-ac71a51debee" containerID="359ca9b84737be2580278b549798d59dd6f1a73becc824d866173184a0cdd102" exitCode=0 Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.727899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerDied","Data":"359ca9b84737be2580278b549798d59dd6f1a73becc824d866173184a0cdd102"} Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.728176 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerStarted","Data":"90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a"} Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.101705 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177287 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177580 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.178732 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume" (OuterVolumeSpecName: "config-volume") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.184950 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.185080 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724" (OuterVolumeSpecName: "kube-api-access-cp724") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "kube-api-access-cp724". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.280880 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") on node \"crc\" DevicePath \"\"" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.281165 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.281246 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.747973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerDied","Data":"90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a"} Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.748026 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a" Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.748077 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.188610 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.198931 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.801075 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" path="/var/lib/kubelet/pods/574a1f73-b7b1-4ff1-9621-3c13ad507d66/volumes"
Jan 20 16:00:19 crc kubenswrapper[4949]: I0120 16:00:19.644258 4949 scope.go:117] "RemoveContainer" containerID="1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"